Test Report: KVM_Linux_crio 17740

                    
6db73b2c9af5fe00de7b62f5c00df582e8611f1d:2023-12-06:32175

Failed tests (30/305)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 153.73
48 TestAddons/StoppedEnableDisable 155.02
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.6
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.38
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.29
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 164.89
212 TestMultiNode/serial/PingHostFrom2Pods 3.34
219 TestMultiNode/serial/RestartKeepsNodes 693.02
221 TestMultiNode/serial/StopMultiNode 143.69
228 TestPreload 274.57
234 TestRunningBinaryUpgrade 139.64
253 TestStoppedBinaryUpgrade/Upgrade 269.41
333 TestStartStop/group/no-preload/serial/Stop 139.78
336 TestStartStop/group/old-k8s-version/serial/Stop 140.04
340 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.32
342 TestStartStop/group/embed-certs/serial/Stop 139.68
343 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
344 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.34
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.37
353 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.29
354 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.31
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 427.51
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 365.55
357 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 327.45
358 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 246.47
363 TestStartStop/group/newest-cni/serial/Stop 140.55
364 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.42
TestAddons/parallel/Ingress (153.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-463584 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-463584 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-463584 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d83abb50-84f5-4145-8a0d-153f7205e73e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d83abb50-84f5-4145-8a0d-153f7205e73e] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.018845528s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-463584 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.977757301s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-463584 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.94
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-463584 addons disable ingress-dns --alsologtostderr -v=1: (1.318176759s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-463584 addons disable ingress --alsologtostderr -v=1: (7.911273406s)
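
Note on the failure above: the stderr shows the remote command exiting with status 28, which is curl's exit code for an operation timeout, so the request carrying the nginx.example.com Host header never got a response back through the ingress. A minimal sketch of how the same checks could be re-run by hand against this profile; the profile name, namespace, selector, and Host header are taken from the log above, while the explicit 10-second --max-time is an added assumption so a hung request fails quickly:

# Confirm the ingress-nginx controller pod is Ready (same selector the test waits on at addons_test.go:206)
kubectl --context addons-463584 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide

# Repeat the probe that timed out, inside the VM, with an explicit curl timeout (10s is an arbitrary choice)
out/minikube-linux-amd64 -p addons-463584 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
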
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-463584 -n addons-463584
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-463584 logs -n 25: (1.451957809s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |                     |
	|         | -p download-only-324691                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC | 06 Dec 23 18:40 UTC |
	| delete  | -p download-only-324691                                                                     | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC | 06 Dec 23 18:40 UTC |
	| delete  | -p download-only-324691                                                                     | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC | 06 Dec 23 18:40 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-106585 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |                     |
	|         | binary-mirror-106585                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39041                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-106585                                                                     | binary-mirror-106585 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC | 06 Dec 23 18:40 UTC |
	| addons  | enable dashboard -p                                                                         | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |                     |
	|         | addons-463584                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |                     |
	|         | addons-463584                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-463584 --wait=true                                                                | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC | 06 Dec 23 18:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	|         | -p addons-463584                                                                            |                      |         |         |                     |                     |
	| addons  | addons-463584 addons                                                                        | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	|         | addons-463584                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	|         | -p addons-463584                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-463584 ip                                                                            | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	| addons  | addons-463584 addons disable                                                                | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-463584 ssh cat                                                                       | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	|         | /opt/local-path-provisioner/pvc-f2e7e006-6181-4bbb-9764-32096133f2ae_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-463584 addons disable                                                                | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:43 UTC | 06 Dec 23 18:43 UTC |
	|         | addons-463584                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-463584 ssh curl -s                                                                   | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-463584 addons disable                                                                | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:44 UTC | 06 Dec 23 18:44 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-463584 addons                                                                        | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:44 UTC | 06 Dec 23 18:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-463584 addons                                                                        | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:44 UTC | 06 Dec 23 18:44 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-463584 ip                                                                            | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:46 UTC | 06 Dec 23 18:46 UTC |
	| addons  | addons-463584 addons disable                                                                | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:46 UTC | 06 Dec 23 18:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-463584 addons disable                                                                | addons-463584        | jenkins | v1.32.0 | 06 Dec 23 18:46 UTC | 06 Dec 23 18:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:40:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:40:49.116299   71244 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:40:49.116465   71244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:49.116480   71244 out.go:309] Setting ErrFile to fd 2...
	I1206 18:40:49.116492   71244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:49.116699   71244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 18:40:49.117341   71244 out.go:303] Setting JSON to false
	I1206 18:40:49.118143   71244 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":4999,"bootTime":1701883050,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:40:49.118206   71244 start.go:138] virtualization: kvm guest
	I1206 18:40:49.120397   71244 out.go:177] * [addons-463584] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:40:49.121782   71244 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 18:40:49.121795   71244 notify.go:220] Checking for updates...
	I1206 18:40:49.123116   71244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:40:49.124567   71244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:40:49.125936   71244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:40:49.127256   71244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:40:49.128862   71244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:40:49.130509   71244 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:40:49.163632   71244 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 18:40:49.164924   71244 start.go:298] selected driver: kvm2
	I1206 18:40:49.164945   71244 start.go:902] validating driver "kvm2" against <nil>
	I1206 18:40:49.164960   71244 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:40:49.165713   71244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:49.165887   71244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 18:40:49.180546   71244 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 18:40:49.180644   71244 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:40:49.180930   71244 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 18:40:49.180997   71244 cni.go:84] Creating CNI manager for ""
	I1206 18:40:49.181018   71244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:40:49.181032   71244 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 18:40:49.181049   71244 start_flags.go:323] config:
	{Name:addons-463584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-463584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:40:49.181217   71244 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:49.182948   71244 out.go:177] * Starting control plane node addons-463584 in cluster addons-463584
	I1206 18:40:49.184220   71244 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:40:49.184262   71244 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 18:40:49.184276   71244 cache.go:56] Caching tarball of preloaded images
	I1206 18:40:49.184360   71244 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 18:40:49.184374   71244 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 18:40:49.184743   71244 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/config.json ...
	I1206 18:40:49.184767   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/config.json: {Name:mkc15de18921ad75db876c6692f7cfdceb797adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:40:49.184979   71244 start.go:365] acquiring machines lock for addons-463584: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 18:40:49.185044   71244 start.go:369] acquired machines lock for "addons-463584" in 48.58µs
	I1206 18:40:49.185065   71244 start.go:93] Provisioning new machine with config: &{Name:addons-463584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-463584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:40:49.185183   71244 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 18:40:49.186769   71244 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1206 18:40:49.186927   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:40:49.186976   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:40:49.201729   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I1206 18:40:49.202298   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:40:49.202902   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:40:49.202934   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:40:49.203376   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:40:49.203589   71244 main.go:141] libmachine: (addons-463584) Calling .GetMachineName
	I1206 18:40:49.203732   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:40:49.203923   71244 start.go:159] libmachine.API.Create for "addons-463584" (driver="kvm2")
	I1206 18:40:49.203969   71244 client.go:168] LocalClient.Create starting
	I1206 18:40:49.204008   71244 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem
	I1206 18:40:49.355491   71244 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem
	I1206 18:40:49.636188   71244 main.go:141] libmachine: Running pre-create checks...
	I1206 18:40:49.636221   71244 main.go:141] libmachine: (addons-463584) Calling .PreCreateCheck
	I1206 18:40:49.636764   71244 main.go:141] libmachine: (addons-463584) Calling .GetConfigRaw
	I1206 18:40:49.637291   71244 main.go:141] libmachine: Creating machine...
	I1206 18:40:49.637307   71244 main.go:141] libmachine: (addons-463584) Calling .Create
	I1206 18:40:49.637507   71244 main.go:141] libmachine: (addons-463584) Creating KVM machine...
	I1206 18:40:49.638790   71244 main.go:141] libmachine: (addons-463584) DBG | found existing default KVM network
	I1206 18:40:49.639500   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:49.639358   71266 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015350}
	I1206 18:40:49.644800   71244 main.go:141] libmachine: (addons-463584) DBG | trying to create private KVM network mk-addons-463584 192.168.39.0/24...
	I1206 18:40:49.714548   71244 main.go:141] libmachine: (addons-463584) DBG | private KVM network mk-addons-463584 192.168.39.0/24 created
	I1206 18:40:49.714580   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:49.714501   71266 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:40:49.714596   71244 main.go:141] libmachine: (addons-463584) Setting up store path in /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584 ...
	I1206 18:40:49.714614   71244 main.go:141] libmachine: (addons-463584) Building disk image from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 18:40:49.714728   71244 main.go:141] libmachine: (addons-463584) Downloading /home/jenkins/minikube-integration/17740-63652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1206 18:40:49.936268   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:49.936133   71266 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa...
	I1206 18:40:50.104292   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:50.104149   71266 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/addons-463584.rawdisk...
	I1206 18:40:50.104373   71244 main.go:141] libmachine: (addons-463584) DBG | Writing magic tar header
	I1206 18:40:50.104416   71244 main.go:141] libmachine: (addons-463584) DBG | Writing SSH key tar header
	I1206 18:40:50.104431   71244 main.go:141] libmachine: (addons-463584) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584 (perms=drwx------)
	I1206 18:40:50.104444   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:50.104291   71266 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584 ...
	I1206 18:40:50.104473   71244 main.go:141] libmachine: (addons-463584) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines (perms=drwxr-xr-x)
	I1206 18:40:50.104489   71244 main.go:141] libmachine: (addons-463584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584
	I1206 18:40:50.104500   71244 main.go:141] libmachine: (addons-463584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines
	I1206 18:40:50.104511   71244 main.go:141] libmachine: (addons-463584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:40:50.104540   71244 main.go:141] libmachine: (addons-463584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652
	I1206 18:40:50.104570   71244 main.go:141] libmachine: (addons-463584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1206 18:40:50.104589   71244 main.go:141] libmachine: (addons-463584) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube (perms=drwxr-xr-x)
	I1206 18:40:50.104608   71244 main.go:141] libmachine: (addons-463584) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652 (perms=drwxrwxr-x)
	I1206 18:40:50.104624   71244 main.go:141] libmachine: (addons-463584) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 18:40:50.104638   71244 main.go:141] libmachine: (addons-463584) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 18:40:50.104647   71244 main.go:141] libmachine: (addons-463584) Creating domain...
	I1206 18:40:50.104657   71244 main.go:141] libmachine: (addons-463584) DBG | Checking permissions on dir: /home/jenkins
	I1206 18:40:50.104670   71244 main.go:141] libmachine: (addons-463584) DBG | Checking permissions on dir: /home
	I1206 18:40:50.104679   71244 main.go:141] libmachine: (addons-463584) DBG | Skipping /home - not owner
	I1206 18:40:50.105695   71244 main.go:141] libmachine: (addons-463584) define libvirt domain using xml: 
	I1206 18:40:50.105714   71244 main.go:141] libmachine: (addons-463584) <domain type='kvm'>
	I1206 18:40:50.105726   71244 main.go:141] libmachine: (addons-463584)   <name>addons-463584</name>
	I1206 18:40:50.105736   71244 main.go:141] libmachine: (addons-463584)   <memory unit='MiB'>4000</memory>
	I1206 18:40:50.105745   71244 main.go:141] libmachine: (addons-463584)   <vcpu>2</vcpu>
	I1206 18:40:50.105758   71244 main.go:141] libmachine: (addons-463584)   <features>
	I1206 18:40:50.105766   71244 main.go:141] libmachine: (addons-463584)     <acpi/>
	I1206 18:40:50.105773   71244 main.go:141] libmachine: (addons-463584)     <apic/>
	I1206 18:40:50.105784   71244 main.go:141] libmachine: (addons-463584)     <pae/>
	I1206 18:40:50.105789   71244 main.go:141] libmachine: (addons-463584)     
	I1206 18:40:50.105794   71244 main.go:141] libmachine: (addons-463584)   </features>
	I1206 18:40:50.105802   71244 main.go:141] libmachine: (addons-463584)   <cpu mode='host-passthrough'>
	I1206 18:40:50.105808   71244 main.go:141] libmachine: (addons-463584)   
	I1206 18:40:50.105813   71244 main.go:141] libmachine: (addons-463584)   </cpu>
	I1206 18:40:50.105819   71244 main.go:141] libmachine: (addons-463584)   <os>
	I1206 18:40:50.105824   71244 main.go:141] libmachine: (addons-463584)     <type>hvm</type>
	I1206 18:40:50.105868   71244 main.go:141] libmachine: (addons-463584)     <boot dev='cdrom'/>
	I1206 18:40:50.105895   71244 main.go:141] libmachine: (addons-463584)     <boot dev='hd'/>
	I1206 18:40:50.105909   71244 main.go:141] libmachine: (addons-463584)     <bootmenu enable='no'/>
	I1206 18:40:50.105923   71244 main.go:141] libmachine: (addons-463584)   </os>
	I1206 18:40:50.105936   71244 main.go:141] libmachine: (addons-463584)   <devices>
	I1206 18:40:50.105950   71244 main.go:141] libmachine: (addons-463584)     <disk type='file' device='cdrom'>
	I1206 18:40:50.105969   71244 main.go:141] libmachine: (addons-463584)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/boot2docker.iso'/>
	I1206 18:40:50.105985   71244 main.go:141] libmachine: (addons-463584)       <target dev='hdc' bus='scsi'/>
	I1206 18:40:50.105998   71244 main.go:141] libmachine: (addons-463584)       <readonly/>
	I1206 18:40:50.106009   71244 main.go:141] libmachine: (addons-463584)     </disk>
	I1206 18:40:50.106020   71244 main.go:141] libmachine: (addons-463584)     <disk type='file' device='disk'>
	I1206 18:40:50.106034   71244 main.go:141] libmachine: (addons-463584)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1206 18:40:50.106054   71244 main.go:141] libmachine: (addons-463584)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/addons-463584.rawdisk'/>
	I1206 18:40:50.106071   71244 main.go:141] libmachine: (addons-463584)       <target dev='hda' bus='virtio'/>
	I1206 18:40:50.106083   71244 main.go:141] libmachine: (addons-463584)     </disk>
	I1206 18:40:50.106095   71244 main.go:141] libmachine: (addons-463584)     <interface type='network'>
	I1206 18:40:50.106105   71244 main.go:141] libmachine: (addons-463584)       <source network='mk-addons-463584'/>
	I1206 18:40:50.106117   71244 main.go:141] libmachine: (addons-463584)       <model type='virtio'/>
	I1206 18:40:50.106134   71244 main.go:141] libmachine: (addons-463584)     </interface>
	I1206 18:40:50.106150   71244 main.go:141] libmachine: (addons-463584)     <interface type='network'>
	I1206 18:40:50.106164   71244 main.go:141] libmachine: (addons-463584)       <source network='default'/>
	I1206 18:40:50.106180   71244 main.go:141] libmachine: (addons-463584)       <model type='virtio'/>
	I1206 18:40:50.106191   71244 main.go:141] libmachine: (addons-463584)     </interface>
	I1206 18:40:50.106201   71244 main.go:141] libmachine: (addons-463584)     <serial type='pty'>
	I1206 18:40:50.106229   71244 main.go:141] libmachine: (addons-463584)       <target port='0'/>
	I1206 18:40:50.106251   71244 main.go:141] libmachine: (addons-463584)     </serial>
	I1206 18:40:50.106267   71244 main.go:141] libmachine: (addons-463584)     <console type='pty'>
	I1206 18:40:50.106283   71244 main.go:141] libmachine: (addons-463584)       <target type='serial' port='0'/>
	I1206 18:40:50.106297   71244 main.go:141] libmachine: (addons-463584)     </console>
	I1206 18:40:50.106307   71244 main.go:141] libmachine: (addons-463584)     <rng model='virtio'>
	I1206 18:40:50.106316   71244 main.go:141] libmachine: (addons-463584)       <backend model='random'>/dev/random</backend>
	I1206 18:40:50.106324   71244 main.go:141] libmachine: (addons-463584)     </rng>
	I1206 18:40:50.106332   71244 main.go:141] libmachine: (addons-463584)     
	I1206 18:40:50.106342   71244 main.go:141] libmachine: (addons-463584)     
	I1206 18:40:50.106355   71244 main.go:141] libmachine: (addons-463584)   </devices>
	I1206 18:40:50.106381   71244 main.go:141] libmachine: (addons-463584) </domain>
	I1206 18:40:50.106396   71244 main.go:141] libmachine: (addons-463584) 
	I1206 18:40:50.111124   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:26:e0:86 in network default
	I1206 18:40:50.111703   71244 main.go:141] libmachine: (addons-463584) Ensuring networks are active...
	I1206 18:40:50.111732   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:50.112468   71244 main.go:141] libmachine: (addons-463584) Ensuring network default is active
	I1206 18:40:50.112751   71244 main.go:141] libmachine: (addons-463584) Ensuring network mk-addons-463584 is active
	I1206 18:40:50.113403   71244 main.go:141] libmachine: (addons-463584) Getting domain xml...
	I1206 18:40:50.114334   71244 main.go:141] libmachine: (addons-463584) Creating domain...
	I1206 18:40:51.334782   71244 main.go:141] libmachine: (addons-463584) Waiting to get IP...
	I1206 18:40:51.335643   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:51.336053   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:51.336083   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:51.336023   71266 retry.go:31] will retry after 275.339825ms: waiting for machine to come up
	I1206 18:40:51.612654   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:51.613183   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:51.613216   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:51.613117   71266 retry.go:31] will retry after 314.7846ms: waiting for machine to come up
	I1206 18:40:51.929794   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:51.930224   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:51.930246   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:51.930167   71266 retry.go:31] will retry after 357.987796ms: waiting for machine to come up
	I1206 18:40:52.289909   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:52.290450   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:52.290473   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:52.290394   71266 retry.go:31] will retry after 487.35607ms: waiting for machine to come up
	I1206 18:40:52.779236   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:52.779654   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:52.779689   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:52.779575   71266 retry.go:31] will retry after 713.269902ms: waiting for machine to come up
	I1206 18:40:53.494476   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:53.495000   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:53.495041   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:53.494902   71266 retry.go:31] will retry after 882.876408ms: waiting for machine to come up
	I1206 18:40:54.379619   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:54.380041   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:54.380066   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:54.379991   71266 retry.go:31] will retry after 1.075225678s: waiting for machine to come up
	I1206 18:40:55.456419   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:55.456855   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:55.456901   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:55.456733   71266 retry.go:31] will retry after 1.347206804s: waiting for machine to come up
	I1206 18:40:56.805374   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:56.805865   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:56.805907   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:56.805817   71266 retry.go:31] will retry after 1.354388618s: waiting for machine to come up
	I1206 18:40:58.162262   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:40:58.162731   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:40:58.162748   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:40:58.162683   71266 retry.go:31] will retry after 2.095720428s: waiting for machine to come up
	I1206 18:41:00.259939   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:00.260386   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:41:00.260423   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:41:00.260361   71266 retry.go:31] will retry after 1.965935835s: waiting for machine to come up
	I1206 18:41:02.228587   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:02.229014   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:41:02.229043   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:41:02.228960   71266 retry.go:31] will retry after 2.649672864s: waiting for machine to come up
	I1206 18:41:04.880401   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:04.880815   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:41:04.880842   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:41:04.880759   71266 retry.go:31] will retry after 4.346319061s: waiting for machine to come up
	I1206 18:41:09.228128   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:09.228478   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find current IP address of domain addons-463584 in network mk-addons-463584
	I1206 18:41:09.228504   71244 main.go:141] libmachine: (addons-463584) DBG | I1206 18:41:09.228432   71266 retry.go:31] will retry after 4.638564005s: waiting for machine to come up
	I1206 18:41:13.872179   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:13.872544   71244 main.go:141] libmachine: (addons-463584) Found IP for machine: 192.168.39.94
	I1206 18:41:13.872574   71244 main.go:141] libmachine: (addons-463584) Reserving static IP address...
	I1206 18:41:13.872591   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has current primary IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:13.872887   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find host DHCP lease matching {name: "addons-463584", mac: "52:54:00:76:40:00", ip: "192.168.39.94"} in network mk-addons-463584
	I1206 18:41:13.948356   71244 main.go:141] libmachine: (addons-463584) Reserved static IP address: 192.168.39.94
	I1206 18:41:13.948389   71244 main.go:141] libmachine: (addons-463584) Waiting for SSH to be available...
	I1206 18:41:13.948404   71244 main.go:141] libmachine: (addons-463584) DBG | Getting to WaitForSSH function...
	I1206 18:41:13.950725   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:13.951049   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584
	I1206 18:41:13.951082   71244 main.go:141] libmachine: (addons-463584) DBG | unable to find defined IP address of network mk-addons-463584 interface with MAC address 52:54:00:76:40:00
	I1206 18:41:13.951156   71244 main.go:141] libmachine: (addons-463584) DBG | Using SSH client type: external
	I1206 18:41:13.951176   71244 main.go:141] libmachine: (addons-463584) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa (-rw-------)
	I1206 18:41:13.951245   71244 main.go:141] libmachine: (addons-463584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 18:41:13.951267   71244 main.go:141] libmachine: (addons-463584) DBG | About to run SSH command:
	I1206 18:41:13.951282   71244 main.go:141] libmachine: (addons-463584) DBG | exit 0
	I1206 18:41:13.954834   71244 main.go:141] libmachine: (addons-463584) DBG | SSH cmd err, output: exit status 255: 
	I1206 18:41:13.954858   71244 main.go:141] libmachine: (addons-463584) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1206 18:41:13.954866   71244 main.go:141] libmachine: (addons-463584) DBG | command : exit 0
	I1206 18:41:13.954872   71244 main.go:141] libmachine: (addons-463584) DBG | err     : exit status 255
	I1206 18:41:13.954880   71244 main.go:141] libmachine: (addons-463584) DBG | output  : 
	I1206 18:41:16.957634   71244 main.go:141] libmachine: (addons-463584) DBG | Getting to WaitForSSH function...
	I1206 18:41:16.960029   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:16.960494   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:16.960527   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:16.960623   71244 main.go:141] libmachine: (addons-463584) DBG | Using SSH client type: external
	I1206 18:41:16.960652   71244 main.go:141] libmachine: (addons-463584) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa (-rw-------)
	I1206 18:41:16.960706   71244 main.go:141] libmachine: (addons-463584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 18:41:16.960734   71244 main.go:141] libmachine: (addons-463584) DBG | About to run SSH command:
	I1206 18:41:16.960753   71244 main.go:141] libmachine: (addons-463584) DBG | exit 0
	I1206 18:41:17.056943   71244 main.go:141] libmachine: (addons-463584) DBG | SSH cmd err, output: <nil>: 
	I1206 18:41:17.057274   71244 main.go:141] libmachine: (addons-463584) KVM machine creation complete!
	I1206 18:41:17.057689   71244 main.go:141] libmachine: (addons-463584) Calling .GetConfigRaw
	I1206 18:41:17.058238   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:17.058460   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:17.058602   71244 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1206 18:41:17.058619   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:17.059794   71244 main.go:141] libmachine: Detecting operating system of created instance...
	I1206 18:41:17.059812   71244 main.go:141] libmachine: Waiting for SSH to be available...
	I1206 18:41:17.059820   71244 main.go:141] libmachine: Getting to WaitForSSH function...
	I1206 18:41:17.059832   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:17.061750   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.062097   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.062129   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.062243   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:17.062435   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.062605   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.062750   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:17.062928   71244 main.go:141] libmachine: Using SSH client type: native
	I1206 18:41:17.063263   71244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1206 18:41:17.063276   71244 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1206 18:41:17.188359   71244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:41:17.188391   71244 main.go:141] libmachine: Detecting the provisioner...
	I1206 18:41:17.188401   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:17.191052   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.191383   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.191424   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.191575   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:17.191791   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.191937   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.192062   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:17.192189   71244 main.go:141] libmachine: Using SSH client type: native
	I1206 18:41:17.192541   71244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1206 18:41:17.192556   71244 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1206 18:41:17.317987   71244 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1206 18:41:17.318129   71244 main.go:141] libmachine: found compatible host: buildroot
	I1206 18:41:17.318143   71244 main.go:141] libmachine: Provisioning with buildroot...
	I1206 18:41:17.318151   71244 main.go:141] libmachine: (addons-463584) Calling .GetMachineName
	I1206 18:41:17.318416   71244 buildroot.go:166] provisioning hostname "addons-463584"
	I1206 18:41:17.318445   71244 main.go:141] libmachine: (addons-463584) Calling .GetMachineName
	I1206 18:41:17.318649   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:17.321361   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.321730   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.321768   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.321878   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:17.322047   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.322206   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.322369   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:17.322549   71244 main.go:141] libmachine: Using SSH client type: native
	I1206 18:41:17.322875   71244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1206 18:41:17.322889   71244 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-463584 && echo "addons-463584" | sudo tee /etc/hostname
	I1206 18:41:17.466734   71244 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-463584
	
	I1206 18:41:17.466763   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:17.469629   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.469980   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.470033   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.470328   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:17.470512   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.470666   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.470755   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:17.470959   71244 main.go:141] libmachine: Using SSH client type: native
	I1206 18:41:17.471294   71244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1206 18:41:17.471313   71244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-463584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-463584/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-463584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:41:17.605852   71244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:41:17.605886   71244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 18:41:17.605913   71244 buildroot.go:174] setting up certificates
	I1206 18:41:17.605925   71244 provision.go:83] configureAuth start
	I1206 18:41:17.605934   71244 main.go:141] libmachine: (addons-463584) Calling .GetMachineName
	I1206 18:41:17.606239   71244 main.go:141] libmachine: (addons-463584) Calling .GetIP
	I1206 18:41:17.608909   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.609293   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.609318   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.609463   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:17.611691   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.612049   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.612069   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.612174   71244 provision.go:138] copyHostCerts
	I1206 18:41:17.612247   71244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 18:41:17.612440   71244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 18:41:17.612531   71244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 18:41:17.612594   71244 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.addons-463584 san=[192.168.39.94 192.168.39.94 localhost 127.0.0.1 minikube addons-463584]
	I1206 18:41:17.762308   71244 provision.go:172] copyRemoteCerts
	I1206 18:41:17.762413   71244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:41:17.762457   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:17.765286   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.765726   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.765747   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.765935   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:17.766136   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.766298   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:17.766448   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:17.858331   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 18:41:17.882798   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1206 18:41:17.905451   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 18:41:17.928157   71244 provision.go:86] duration metric: configureAuth took 322.21608ms
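For reference, the SANs minted into server.pem above (192.168.39.94, localhost, 127.0.0.1, minikube, addons-463584) can be checked on the node once the copy completes; an illustrative openssl call against the remote path used above:
	# illustrative: inspect the SANs of the server certificate copied to the node above
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"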
	I1206 18:41:17.928189   71244 buildroot.go:189] setting minikube options for container-runtime
	I1206 18:41:17.928442   71244 config.go:182] Loaded profile config "addons-463584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:41:17.928560   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:17.931049   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.931417   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:17.931450   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:17.931651   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:17.931857   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.932006   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:17.932124   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:17.932281   71244 main.go:141] libmachine: Using SSH client type: native
	I1206 18:41:17.932650   71244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1206 18:41:17.932668   71244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:41:18.254177   71244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:41:18.254217   71244 main.go:141] libmachine: Checking connection to Docker...
	I1206 18:41:18.254227   71244 main.go:141] libmachine: (addons-463584) Calling .GetURL
	I1206 18:41:18.255406   71244 main.go:141] libmachine: (addons-463584) DBG | Using libvirt version 6000000
	I1206 18:41:18.257638   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.258037   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:18.258068   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.258269   71244 main.go:141] libmachine: Docker is up and running!
	I1206 18:41:18.258288   71244 main.go:141] libmachine: Reticulating splines...
	I1206 18:41:18.258295   71244 client.go:171] LocalClient.Create took 29.054318359s
	I1206 18:41:18.258319   71244 start.go:167] duration metric: libmachine.API.Create for "addons-463584" took 29.054398131s
	I1206 18:41:18.258335   71244 start.go:300] post-start starting for "addons-463584" (driver="kvm2")
	I1206 18:41:18.258351   71244 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:41:18.258367   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:18.258580   71244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:41:18.258596   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:18.260911   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.261277   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:18.261313   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.261442   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:18.261630   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:18.261781   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:18.261997   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:18.354454   71244 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:41:18.358519   71244 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 18:41:18.358547   71244 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 18:41:18.358618   71244 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 18:41:18.358650   71244 start.go:303] post-start completed in 100.306395ms
	I1206 18:41:18.358689   71244 main.go:141] libmachine: (addons-463584) Calling .GetConfigRaw
	I1206 18:41:18.359293   71244 main.go:141] libmachine: (addons-463584) Calling .GetIP
	I1206 18:41:18.361691   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.362065   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:18.362094   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.362330   71244 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/config.json ...
	I1206 18:41:18.362504   71244 start.go:128] duration metric: createHost completed in 29.177308544s
	I1206 18:41:18.362525   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:18.364688   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.364973   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:18.365003   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.365157   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:18.365335   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:18.365468   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:18.365579   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:18.365714   71244 main.go:141] libmachine: Using SSH client type: native
	I1206 18:41:18.366071   71244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1206 18:41:18.366086   71244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 18:41:18.494003   71244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701888078.481876972
	
	I1206 18:41:18.494036   71244 fix.go:206] guest clock: 1701888078.481876972
	I1206 18:41:18.494043   71244 fix.go:219] Guest: 2023-12-06 18:41:18.481876972 +0000 UTC Remote: 2023-12-06 18:41:18.362515032 +0000 UTC m=+29.295971278 (delta=119.36194ms)
	I1206 18:41:18.494063   71244 fix.go:190] guest clock delta is within tolerance: 119.36194ms
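The fix step above derives the skew by running date +%s.%N in the guest and subtracting the host's notion of the same instant; a minimal sketch of that comparison (the key path and IP are the ones logged above, and the tolerance you compare against is an assumption, not the value fix.go enforces):
	# sketch of the guest/host clock-skew check
	guest=$(ssh -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa docker@192.168.39.94 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest clock delta: $(echo "$guest - $host" | bc -l)s"   # compare against the allowed tolerance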
	I1206 18:41:18.494068   71244 start.go:83] releasing machines lock for "addons-463584", held for 29.309012842s
	I1206 18:41:18.494086   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:18.494395   71244 main.go:141] libmachine: (addons-463584) Calling .GetIP
	I1206 18:41:18.496919   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.497306   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:18.497336   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.497508   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:18.497996   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:18.498174   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:18.498311   71244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:41:18.498354   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:18.498426   71244 ssh_runner.go:195] Run: cat /version.json
	I1206 18:41:18.498449   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:18.501079   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.501292   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.501449   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:18.501476   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.501598   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:18.501766   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:18.501782   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:18.501788   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:18.501946   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:18.502006   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:18.502099   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:18.502189   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:18.502325   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:18.502448   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:18.590975   71244 ssh_runner.go:195] Run: systemctl --version
	I1206 18:41:18.622307   71244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:41:18.782304   71244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 18:41:18.788046   71244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 18:41:18.788156   71244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:41:18.804543   71244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 18:41:18.804574   71244 start.go:475] detecting cgroup driver to use...
	I1206 18:41:18.804658   71244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:41:18.817915   71244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:41:18.831012   71244 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:41:18.831069   71244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:41:18.844704   71244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:41:18.858823   71244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 18:41:18.968935   71244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:41:19.083087   71244 docker.go:219] disabling docker service ...
	I1206 18:41:19.083171   71244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:41:19.097091   71244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:41:19.109735   71244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:41:19.212687   71244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:41:19.314272   71244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:41:19.327637   71244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:41:19.344623   71244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 18:41:19.344692   71244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:41:19.354977   71244 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 18:41:19.355050   71244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:41:19.365259   71244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:41:19.375239   71244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
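A quick way to confirm the three sed edits above landed is to grep the drop-in for the keys they touch (illustrative; the expected values are exactly the substitutions shown above):
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"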
	I1206 18:41:19.385181   71244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:41:19.395352   71244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:41:19.404357   71244 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 18:41:19.404412   71244 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 18:41:19.418394   71244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
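The netfilter handling above is a check-then-fallback: when the bridge sysctl key is missing, br_netfilter is loaded and IPv4 forwarding is enabled. Condensed into a sketch of the same sequence:
	# check-then-fallback mirroring the logged commands
	sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"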
	I1206 18:41:19.427701   71244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:41:19.533986   71244 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 18:41:19.705467   71244 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 18:41:19.705565   71244 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 18:41:19.710148   71244 start.go:543] Will wait 60s for crictl version
	I1206 18:41:19.710247   71244 ssh_runner.go:195] Run: which crictl
	I1206 18:41:19.713708   71244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:41:19.751437   71244 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 18:41:19.751557   71244 ssh_runner.go:195] Run: crio --version
	I1206 18:41:19.798871   71244 ssh_runner.go:195] Run: crio --version
	I1206 18:41:19.842457   71244 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 18:41:19.843807   71244 main.go:141] libmachine: (addons-463584) Calling .GetIP
	I1206 18:41:19.846605   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:19.846991   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:19.847018   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:19.847294   71244 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 18:41:19.851387   71244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:41:19.863964   71244 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:41:19.864053   71244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:41:19.900216   71244 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 18:41:19.900283   71244 ssh_runner.go:195] Run: which lz4
	I1206 18:41:19.904376   71244 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 18:41:19.908556   71244 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 18:41:19.908589   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 18:41:21.708631   71244 crio.go:444] Took 1.804287 seconds to copy over tarball
	I1206 18:41:21.708722   71244 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 18:41:24.721318   71244 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012556259s)
	I1206 18:41:24.721362   71244 crio.go:451] Took 3.012698 seconds to extract the tarball
	I1206 18:41:24.721375   71244 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 18:41:24.762577   71244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:41:24.826378   71244 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 18:41:24.826405   71244 cache_images.go:84] Images are preloaded, skipping loading
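To spot-check that the preload really populated the CRI-O image store, the same crictl listing can be filtered for a control-plane image (illustrative):
	sudo crictl images | grep kube-apiserver
	# a registry.k8s.io/kube-apiserver entry at v1.28.4 confirms the tarball was extracted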
	I1206 18:41:24.826484   71244 ssh_runner.go:195] Run: crio config
	I1206 18:41:24.900190   71244 cni.go:84] Creating CNI manager for ""
	I1206 18:41:24.900244   71244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:41:24.900264   71244 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:41:24.900322   71244 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-463584 NodeName:addons-463584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 18:41:24.900486   71244 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-463584"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
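The generated config can be sanity-checked before the real init with a dry run against the same path it is written to later in this log (illustrative; assumes the staged kubeadm binary referenced in the init command further below):
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run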
	I1206 18:41:24.900578   71244 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-463584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-463584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 18:41:24.900656   71244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 18:41:24.913765   71244 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 18:41:24.913864   71244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 18:41:24.923895   71244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1206 18:41:24.941929   71244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 18:41:24.958205   71244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1206 18:41:24.974820   71244 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I1206 18:41:24.978585   71244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:41:24.990961   71244 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584 for IP: 192.168.39.94
	I1206 18:41:24.991034   71244 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:24.991209   71244 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 18:41:25.104018   71244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt ...
	I1206 18:41:25.104052   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt: {Name:mk837b78c88743b97ef5f04bc69be47d08079dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.104231   71244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key ...
	I1206 18:41:25.104242   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key: {Name:mke927e6b71c42b103f771f3203712d7738638e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.104316   71244 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 18:41:25.244853   71244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt ...
	I1206 18:41:25.244885   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt: {Name:mkec4c040c7d9dd392946de837264352a0a23498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.245056   71244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key ...
	I1206 18:41:25.245066   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key: {Name:mka0186d4dd6c04193c46b743db7f4d77f814b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.245168   71244 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.key
	I1206 18:41:25.245183   71244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt with IP's: []
	I1206 18:41:25.484084   71244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt ...
	I1206 18:41:25.484123   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: {Name:mkcf99f4ef243eb3ec09fc3f62266752f9196673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.484322   71244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.key ...
	I1206 18:41:25.484339   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.key: {Name:mk5c692c7844327ebf04bf0a73ac9037923a4712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.484435   71244 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.key.e433db91
	I1206 18:41:25.484459   71244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.crt.e433db91 with IP's: [192.168.39.94 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 18:41:25.541970   71244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.crt.e433db91 ...
	I1206 18:41:25.541999   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.crt.e433db91: {Name:mk8e7aa0971b7ea3e7f363a92c098d12eb1a5d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.542183   71244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.key.e433db91 ...
	I1206 18:41:25.542205   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.key.e433db91: {Name:mk175b73ff0732a96c6a1919543885afc21544e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.542335   71244 certs.go:337] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.crt.e433db91 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.crt
	I1206 18:41:25.542438   71244 certs.go:341] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.key.e433db91 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.key
	I1206 18:41:25.542511   71244 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.key
	I1206 18:41:25.542533   71244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.crt with IP's: []
	I1206 18:41:25.761158   71244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.crt ...
	I1206 18:41:25.761193   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.crt: {Name:mk4c0800d8aecc9a50cdd70b33289f18ea05d78a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.761388   71244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.key ...
	I1206 18:41:25.761408   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.key: {Name:mk51feebec4dc3b6de0305c4273e8181cd3715a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:25.761630   71244 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 18:41:25.761680   71244 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 18:41:25.761717   71244 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 18:41:25.761748   71244 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 18:41:25.762384   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 18:41:25.786894   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 18:41:25.810031   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 18:41:25.833866   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 18:41:25.857259   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 18:41:25.881052   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 18:41:25.904784   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 18:41:25.928011   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 18:41:25.951563   71244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 18:41:25.974445   71244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 18:41:25.990797   71244 ssh_runner.go:195] Run: openssl version
	I1206 18:41:25.996382   71244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 18:41:26.006414   71244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:41:26.011165   71244 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:41:26.011219   71244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:41:26.016792   71244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
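The b5213941.0 link name above is not arbitrary: it is the subject-name hash emitted by the openssl x509 -hash call two commands earlier, which is how OpenSSL locates CA certificates under /etc/ssl/certs. The same linking done by hand:
	# name the symlink after the CA's subject hash, as computed by `openssl x509 -hash`
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"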
	I1206 18:41:26.026872   71244 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 18:41:26.031363   71244 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:41:26.031434   71244 kubeadm.go:404] StartCluster: {Name:addons-463584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-463584 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:41:26.031520   71244 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 18:41:26.031567   71244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 18:41:26.072347   71244 cri.go:89] found id: ""
	I1206 18:41:26.072419   71244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 18:41:26.081569   71244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 18:41:26.090178   71244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 18:41:26.098808   71244 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 18:41:26.098855   71244 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 18:41:26.151011   71244 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 18:41:26.151125   71244 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 18:41:26.292915   71244 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 18:41:26.293096   71244 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 18:41:26.293190   71244 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 18:41:26.535169   71244 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 18:41:26.669748   71244 out.go:204]   - Generating certificates and keys ...
	I1206 18:41:26.669929   71244 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 18:41:26.670017   71244 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 18:41:26.688662   71244 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 18:41:26.892456   71244 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 18:41:27.002022   71244 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 18:41:27.217465   71244 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 18:41:27.343671   71244 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 18:41:27.344037   71244 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-463584 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I1206 18:41:27.562170   71244 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 18:41:27.562434   71244 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-463584 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I1206 18:41:27.720484   71244 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 18:41:27.977423   71244 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 18:41:28.205370   71244 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 18:41:28.205828   71244 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 18:41:28.326086   71244 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 18:41:28.490988   71244 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 18:41:28.608507   71244 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 18:41:28.679111   71244 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 18:41:28.679778   71244 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 18:41:28.684172   71244 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 18:41:28.686201   71244 out.go:204]   - Booting up control plane ...
	I1206 18:41:28.686373   71244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 18:41:28.686475   71244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 18:41:28.686591   71244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 18:41:28.701302   71244 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:41:28.702168   71244 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:41:28.702271   71244 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 18:41:28.820615   71244 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 18:41:36.322695   71244 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502966 seconds
	I1206 18:41:36.322871   71244 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 18:41:36.351851   71244 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 18:41:36.882696   71244 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 18:41:36.882966   71244 kubeadm.go:322] [mark-control-plane] Marking the node addons-463584 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 18:41:37.396797   71244 kubeadm.go:322] [bootstrap-token] Using token: bdnjm4.sel486pnwvl21oi3
	I1206 18:41:37.399455   71244 out.go:204]   - Configuring RBAC rules ...
	I1206 18:41:37.399580   71244 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 18:41:37.403825   71244 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 18:41:37.416205   71244 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 18:41:37.420103   71244 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 18:41:37.424612   71244 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 18:41:37.428787   71244 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 18:41:37.443792   71244 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 18:41:37.670933   71244 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 18:41:37.833215   71244 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 18:41:37.834196   71244 kubeadm.go:322] 
	I1206 18:41:37.834262   71244 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 18:41:37.834277   71244 kubeadm.go:322] 
	I1206 18:41:37.834371   71244 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 18:41:37.834383   71244 kubeadm.go:322] 
	I1206 18:41:37.834414   71244 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 18:41:37.834496   71244 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 18:41:37.834575   71244 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 18:41:37.834587   71244 kubeadm.go:322] 
	I1206 18:41:37.834660   71244 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 18:41:37.834696   71244 kubeadm.go:322] 
	I1206 18:41:37.834785   71244 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 18:41:37.834798   71244 kubeadm.go:322] 
	I1206 18:41:37.834857   71244 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 18:41:37.834950   71244 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 18:41:37.835055   71244 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 18:41:37.835068   71244 kubeadm.go:322] 
	I1206 18:41:37.835175   71244 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 18:41:37.835285   71244 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 18:41:37.835301   71244 kubeadm.go:322] 
	I1206 18:41:37.835405   71244 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token bdnjm4.sel486pnwvl21oi3 \
	I1206 18:41:37.835528   71244 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 18:41:37.835558   71244 kubeadm.go:322] 	--control-plane 
	I1206 18:41:37.835567   71244 kubeadm.go:322] 
	I1206 18:41:37.835662   71244 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 18:41:37.835674   71244 kubeadm.go:322] 
	I1206 18:41:37.835785   71244 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token bdnjm4.sel486pnwvl21oi3 \
	I1206 18:41:37.835935   71244 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 18:41:37.836097   71244 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
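The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's Subject Public Key Info. A hedged Go sketch of reproducing it from /var/lib/minikube/certs/ca.crt (the path follows the certificateDir reported earlier; error handling is minimal):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Read the cluster CA certificate written during "kubeadm init".
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
    }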
	I1206 18:41:37.836144   71244 cni.go:84] Creating CNI manager for ""
	I1206 18:41:37.836166   71244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:41:37.838205   71244 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 18:41:37.839729   71244 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 18:41:37.896104   71244 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
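The 457-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI configuration. The exact template is not shown in the log; as an illustration only, a bridge-plus-portmap conflist of roughly that shape could be generated as below (subnet and plugin options are assumptions, not the values minikube actually writes):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// A minimal bridge CNI chain: the bridge plugin for pod networking
    	// plus portmap for hostPort support. All values here are illustrative.
    	conflist := map[string]interface{}{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]interface{}{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]interface{}{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]bool{"portMappings": true},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out)) // would be written to /etc/cni/net.d/1-k8s.conflist
    }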
	I1206 18:41:37.965737   71244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 18:41:37.965822   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:37.965868   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=addons-463584 minikube.k8s.io/updated_at=2023_12_06T18_41_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:38.158603   71244 ops.go:34] apiserver oom_adj: -16
	I1206 18:41:38.158721   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:38.268980   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:38.874761   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:39.374689   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:39.874808   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:40.374930   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:40.875024   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:41.374321   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:41.874285   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:42.374221   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:42.874598   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:43.374443   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:43.874103   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:44.374505   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:44.874382   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:45.374174   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:45.874519   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:46.375053   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:46.874994   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:47.374484   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:47.875073   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:48.375130   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:48.874510   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:49.374380   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:49.874656   71244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:41:50.000327   71244 kubeadm.go:1088] duration metric: took 12.034571741s to wait for elevateKubeSystemPrivileges.
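The burst of identical "kubectl get sa default" runs between 18:41:38 and 18:41:49 is a readiness poll: elevateKubeSystemPrivileges retries roughly every 500ms until the default service account exists, which is why the step reports 12.034571741s. A generic sketch of that retry pattern (waitForDefaultSA and the 500ms/2m values are illustrative, not the exact minikube implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls until `kubectl get sa default` succeeds or the
    // deadline passes, mirroring the repeated runs seen in the log above.
    func waitForDefaultSA(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default")
    		if err := cmd.Run(); err == nil {
    			return nil // service account exists; safe to bind RBAC to it
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for default service account")
    }

    func main() {
    	start := time.Now()
    	if err := waitForDefaultSA(500*time.Millisecond, 2*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("default SA ready after %s\n", time.Since(start))
    }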
	I1206 18:41:50.000395   71244 kubeadm.go:406] StartCluster complete in 23.968967948s
	I1206 18:41:50.000433   71244 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:50.000580   71244 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:41:50.001096   71244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:41:50.001346   71244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 18:41:50.001420   71244 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
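Everything that follows is the enable-addons phase for the toEnable map above: each enabled addon spawns its own libmachine plugin server and SSH session, which is why the per-addon lines below interleave. As a rough illustration of that fan-out (enableAddon is a hypothetical stand-in for the real per-addon logic):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // enableAddon stands in for the real per-addon work (launching a driver
    // plugin, opening an SSH session, copying the addon manifests).
    func enableAddon(name string) {
    	fmt.Println("enabling addon:", name)
    }

    func main() {
    	toEnable := map[string]bool{
    		"ingress": true, "ingress-dns": true, "metrics-server": true,
    		"registry": true, "storage-provisioner": true, "ambassador": false,
    	}
    	var wg sync.WaitGroup
    	for name, enabled := range toEnable {
    		if !enabled {
    			continue
    		}
    		wg.Add(1)
    		go func(n string) { // one goroutine per addon, hence interleaved logs
    			defer wg.Done()
    			enableAddon(n)
    		}(name)
    	}
    	wg.Wait()
    }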
	I1206 18:41:50.001522   71244 addons.go:69] Setting volumesnapshots=true in profile "addons-463584"
	I1206 18:41:50.001557   71244 addons.go:69] Setting ingress-dns=true in profile "addons-463584"
	I1206 18:41:50.001581   71244 addons.go:69] Setting inspektor-gadget=true in profile "addons-463584"
	I1206 18:41:50.001594   71244 addons.go:231] Setting addon ingress-dns=true in "addons-463584"
	I1206 18:41:50.001603   71244 addons.go:231] Setting addon inspektor-gadget=true in "addons-463584"
	I1206 18:41:50.001610   71244 addons.go:231] Setting addon volumesnapshots=true in "addons-463584"
	I1206 18:41:50.001549   71244 addons.go:69] Setting cloud-spanner=true in profile "addons-463584"
	I1206 18:41:50.001656   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.001657   71244 config.go:182] Loaded profile config "addons-463584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:41:50.001668   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.001581   71244 addons.go:69] Setting storage-provisioner=true in profile "addons-463584"
	I1206 18:41:50.001680   71244 addons.go:231] Setting addon storage-provisioner=true in "addons-463584"
	I1206 18:41:50.001668   71244 addons.go:231] Setting addon cloud-spanner=true in "addons-463584"
	I1206 18:41:50.001715   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.001523   71244 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-463584"
	I1206 18:41:50.001733   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.001762   71244 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-463584"
	I1206 18:41:50.001809   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.001552   71244 addons.go:69] Setting gcp-auth=true in profile "addons-463584"
	I1206 18:41:50.001886   71244 mustload.go:65] Loading cluster: addons-463584
	I1206 18:41:50.002082   71244 config.go:182] Loaded profile config "addons-463584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:41:50.001535   71244 addons.go:69] Setting helm-tiller=true in profile "addons-463584"
	I1206 18:41:50.002141   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.002153   71244 addons.go:231] Setting addon helm-tiller=true in "addons-463584"
	I1206 18:41:50.002154   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.002157   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.002171   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.002180   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.002186   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.002188   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.002225   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.002246   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.001549   71244 addons.go:69] Setting ingress=true in profile "addons-463584"
	I1206 18:41:50.002266   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.002282   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.002268   71244 addons.go:231] Setting addon ingress=true in "addons-463584"
	I1206 18:41:50.001587   71244 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-463584"
	I1206 18:41:50.001563   71244 addons.go:69] Setting registry=true in profile "addons-463584"
	I1206 18:41:50.002331   71244 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-463584"
	I1206 18:41:50.001656   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.002340   71244 addons.go:231] Setting addon registry=true in "addons-463584"
	I1206 18:41:50.001537   71244 addons.go:69] Setting metrics-server=true in profile "addons-463584"
	I1206 18:41:50.002356   71244 addons.go:231] Setting addon metrics-server=true in "addons-463584"
	I1206 18:41:50.001572   71244 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-463584"
	I1206 18:41:50.002374   71244 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-463584"
	I1206 18:41:50.001553   71244 addons.go:69] Setting default-storageclass=true in profile "addons-463584"
	I1206 18:41:50.002414   71244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-463584"
	I1206 18:41:50.002487   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.002512   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.002642   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.002669   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.002677   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.002730   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.002986   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.002993   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.003002   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.003009   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.003045   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.003010   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.003067   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.003118   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.003132   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.003144   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.003378   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.003392   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.003415   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.003549   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.020743   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44295
	I1206 18:41:50.021008   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1206 18:41:50.021117   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I1206 18:41:50.021171   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I1206 18:41:50.021492   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.021534   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.021595   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.021990   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.022014   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.022226   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.022250   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.022461   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.022484   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.022585   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.022585   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.022621   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37645
	I1206 18:41:50.022977   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.022990   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.023043   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.023104   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.023159   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.023277   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.023286   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.023311   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.023485   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.023509   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.023831   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.023874   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.023952   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.024199   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.025219   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.029706   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.029755   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.029781   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.029834   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.029889   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.029927   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.029724   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.030199   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.039368   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I1206 18:41:50.045795   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.046676   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.046705   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.047323   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.047921   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.048174   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.056771   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I1206 18:41:50.057309   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.057779   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.057807   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.058192   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.058747   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.058801   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.059015   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I1206 18:41:50.059373   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.059852   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.059880   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.060266   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.060488   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.062775   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.065019   71244 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1206 18:41:50.063400   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42193
	I1206 18:41:50.066588   71244 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 18:41:50.066603   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1206 18:41:50.066624   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.069350   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I1206 18:41:50.069806   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.070581   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.070600   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.071481   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43267
	I1206 18:41:50.071604   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.071641   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.071947   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.072493   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.072511   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.072575   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.072590   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.073099   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.073125   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.073330   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.073423   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.073483   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I1206 18:41:50.074158   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I1206 18:41:50.074211   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.074229   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.074284   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.074508   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.074633   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.074697   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.075214   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.075850   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.075890   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.076185   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.076203   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.076821   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.076889   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.077377   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.077426   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.077461   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.077517   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.077530   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.077906   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.078112   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.078717   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I1206 18:41:50.079030   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.081429   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I1206 18:41:50.081547   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I1206 18:41:50.081588   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.081633   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I1206 18:41:50.082243   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.082262   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.082331   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.083181   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.083200   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.083260   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.083465   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.083730   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.084024   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I1206 18:41:50.084482   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.084514   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.084961   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.085419   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.085490   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.085515   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.085533   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.085546   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.085605   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.087952   71244 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1206 18:41:50.086071   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.086095   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.086650   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.086834   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.089599   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.089690   71244 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 18:41:50.089866   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.091515   71244 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1206 18:41:50.092940   71244 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1206 18:41:50.092957   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 18:41:50.092977   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.091611   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.093036   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36523
	I1206 18:41:50.091609   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 18:41:50.093058   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.094319   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 18:41:50.094318   71244 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-463584"
	I1206 18:41:50.092374   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.094003   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.094815   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.095679   71244 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 18:41:50.095699   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 18:41:50.095725   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.095795   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.096240   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.096263   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.096511   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.096779   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.096839   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1206 18:41:50.096993   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.097011   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.097037   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.097045   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.097077   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.097432   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.097449   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.097510   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.097920   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.097970   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.098029   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.098115   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.098160   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.098389   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.098517   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.098529   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.098891   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.098942   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.099085   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.099131   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.099167   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I1206 18:41:50.099622   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.099699   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.100318   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.100318   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.100433   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.100457   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.100496   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.101709   71244 addons.go:231] Setting addon default-storageclass=true in "addons-463584"
	I1206 18:41:50.101747   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:50.102113   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.102143   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.102332   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.102349   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.102378   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.102401   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.102442   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.102666   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.104424   71244 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1206 18:41:50.102853   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.103061   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.105196   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I1206 18:41:50.105967   71244 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1206 18:41:50.105984   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1206 18:41:50.106002   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.106151   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.106961   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.107477   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.108336   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.108381   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.108399   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.109994   71244 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1206 18:41:50.108944   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.111120   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.111559   71244 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 18:41:50.111575   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 18:41:50.111600   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.111603   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.111622   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.111857   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.112519   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.112701   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.112858   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.112987   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.114194   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I1206 18:41:50.114619   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.115254   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.115282   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.115350   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.115838   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.115873   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.115918   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.116158   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.116370   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.116435   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.116483   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I1206 18:41:50.116913   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.116966   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.118811   71244 out.go:177]   - Using image docker.io/registry:2.8.3
	I1206 18:41:50.118868   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.118751   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.118767   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37981
	I1206 18:41:50.118715   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I1206 18:41:50.120424   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.120484   71244 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1206 18:41:50.121144   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.122093   71244 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 18:41:50.122120   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1206 18:41:50.121148   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.122140   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.121148   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.121380   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.122853   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.122930   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.122946   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.123016   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.123031   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.123310   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.123367   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.123559   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.123604   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.125901   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.126108   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.126253   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.128258   71244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:41:50.127030   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.127297   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.127571   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.129029   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I1206 18:41:50.129912   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.129935   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.129874   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 18:41:50.129944   71244 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:41:50.131235   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 18:41:50.129964   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 18:41:50.129884   71244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1206 18:41:50.129986   71244 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-463584" context rescaled to 1 replicas
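The "coredns" deployment "rescaled to 1 replicas" line above reflects minikube trimming CoreDNS down to a single replica on this single-node cluster. A hedged client-go sketch of the same scale-down (kubeconfig path and error handling are illustrative, not the code path used by kapi.go):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	deployments := client.AppsV1().Deployments("kube-system")
    	// Fetch the current scale subresource and set it to a single replica.
    	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }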
	I1206 18:41:50.130188   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.130712   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.132178   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I1206 18:41:50.132854   71244 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:41:50.132889   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.133066   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.134655   71244 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1206 18:41:50.135145   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.135196   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.137560   71244 out.go:177] * Verifying Kubernetes components...
	I1206 18:41:50.136301   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.136314   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 18:41:50.136474   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.136668   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.138972   71244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:41:50.139010   71244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 18:41:50.139206   71244 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1206 18:41:50.139306   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.139452   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.139740   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.140249   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.141025   71244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 18:41:50.141043   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1206 18:41:50.142385   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.142407   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.142438   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.144194   71244 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 18:41:50.144215   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1206 18:41:50.144232   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.145856   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 18:41:50.142607   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.143025   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.143059   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.145156   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.145831   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.148996   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 18:41:50.147365   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.147400   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.147469   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.147474   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.147618   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.147968   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.148127   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:50.150737   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:50.150745   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.150762   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.150799   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.152257   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 18:41:50.151065   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.151103   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.151110   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.153032   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.153070   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.154216   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 18:41:50.155826   71244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 18:41:50.154491   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.157281   71244 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 18:41:50.157313   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 18:41:50.157342   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.160686   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.161127   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.161169   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.161342   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.161578   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.161742   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.161890   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.167908   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I1206 18:41:50.168599   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.169148   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.169177   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.169520   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I1206 18:41:50.169649   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.169851   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.169902   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:50.170407   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:50.170423   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:50.170732   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:50.170920   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:50.171849   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.172198   71244 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 18:41:50.172252   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 18:41:50.172280   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.172452   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:50.174525   71244 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 18:41:50.176111   71244 out.go:177]   - Using image docker.io/busybox:stable
	I1206 18:41:50.174945   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.175654   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.177659   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.177686   71244 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 18:41:50.177702   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 18:41:50.177718   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:50.177691   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.177782   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.177999   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.178189   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:50.180619   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.181021   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:50.181057   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:50.181192   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:50.181380   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:50.181598   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:50.181717   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	W1206 18:41:50.182741   71244 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 18:41:50.182760   71244 retry.go:31] will retry after 198.390562ms: ssh: handshake failed: EOF
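
The handshake EOF above is transient; minikube's retry helper simply waits a short, growing delay and dials again, which is what the "will retry after 198.390562ms" line records. A minimal Go sketch of that retry-with-backoff pattern, using a plain TCP dial and a hypothetical retryDial helper (an illustration only, not minikube's actual retry.go):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // retryDial retries a TCP dial with a growing delay between attempts,
    // mirroring the "will retry after ..." behaviour in the log above.
    func retryDial(addr string, attempts int) (net.Conn, error) {
        delay := 200 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            fmt.Printf("dial failed, will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // back off a little more on each failure
        }
        return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        // Hypothetical target: the node's SSH endpoint from the log.
        if conn, err := retryDial("192.168.39.94:22", 5); err == nil {
            conn.Close()
        }
    }
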
	I1206 18:41:50.287541   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 18:41:50.332020   71244 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 18:41:50.332043   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 18:41:50.392438   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 18:41:50.397725   71244 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 18:41:50.397746   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 18:41:50.404958   71244 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 18:41:50.404989   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 18:41:50.435183   71244 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1206 18:41:50.435219   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1206 18:41:50.453458   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 18:41:50.465275   71244 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 18:41:50.465309   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 18:41:50.472426   71244 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 18:41:50.472454   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 18:41:50.483290   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:41:50.510160   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 18:41:50.528367   71244 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1206 18:41:50.528403   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1206 18:41:50.557329   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 18:41:50.586515   71244 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1206 18:41:50.586549   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1206 18:41:50.597157   71244 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 18:41:50.597184   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 18:41:50.604046   71244 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 18:41:50.604067   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 18:41:50.605610   71244 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 18:41:50.605628   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 18:41:50.647598   71244 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 18:41:50.647634   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 18:41:50.667551   71244 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1206 18:41:50.667585   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1206 18:41:50.751411   71244 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1206 18:41:50.751439   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1206 18:41:50.807399   71244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 18:41:50.808051   71244 node_ready.go:35] waiting up to 6m0s for node "addons-463584" to be "Ready" ...
	I1206 18:41:50.826747   71244 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 18:41:50.826771   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 18:41:50.840326   71244 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 18:41:50.840357   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 18:41:50.846448   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 18:41:50.852245   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1206 18:41:50.865472   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 18:41:50.869384   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 18:41:50.896760   71244 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1206 18:41:50.896788   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1206 18:41:51.053101   71244 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 18:41:51.053148   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 18:41:51.063221   71244 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1206 18:41:51.063250   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1206 18:41:51.077384   71244 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 18:41:51.077407   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 18:41:51.135611   71244 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 18:41:51.135638   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 18:41:51.150165   71244 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1206 18:41:51.150193   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1206 18:41:51.193997   71244 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 18:41:51.194028   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 18:41:51.247932   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 18:41:51.274344   71244 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1206 18:41:51.274376   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1206 18:41:51.320480   71244 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 18:41:51.320512   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 18:41:51.343974   71244 node_ready.go:49] node "addons-463584" has status "Ready":"True"
	I1206 18:41:51.344005   71244 node_ready.go:38] duration metric: took 535.909006ms waiting for node "addons-463584" to be "Ready" ...
	I1206 18:41:51.344018   71244 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
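
node_ready.go and pod_ready.go work by polling the API server until the node and each system-critical pod report a Ready condition of True, within the 6m0s budget noted above. A minimal client-go sketch of the per-pod wait, using the kubeconfig path from the log and a hypothetical waitPodReady helper (a rough illustration, not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout expires, roughly what pod_ready.go does for each pod above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "etcd-addons-463584", 6*time.Minute))
    }
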
	I1206 18:41:51.369259   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1206 18:41:51.428249   71244 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 18:41:51.428283   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 18:41:51.498620   71244 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 18:41:51.498647   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 18:41:51.564607   71244 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 18:41:51.564627   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 18:41:51.620648   71244 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 18:41:51.620679   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 18:41:51.658522   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 18:41:51.670602   71244 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5k288" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:54.015379   71244 pod_ready.go:102] pod "coredns-5dd5756b68-5k288" in "kube-system" namespace has status "Ready":"False"
	I1206 18:41:54.859065   71244 pod_ready.go:97] pod "coredns-5dd5756b68-5k288" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.94 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-12-06 18:41:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc003c6c0aa AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1206 18:41:54.859106   71244 pod_ready.go:81] duration metric: took 3.188472185s waiting for pod "coredns-5dd5756b68-5k288" in "kube-system" namespace to be "Ready" ...
	E1206 18:41:54.859119   71244 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-5k288" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:41:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.94 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-12-06 18:41:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc003c6c0aa AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1206 18:41:54.859152   71244 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:55.512548   71244 pod_ready.go:92] pod "etcd-addons-463584" in "kube-system" namespace has status "Ready":"True"
	I1206 18:41:55.512606   71244 pod_ready.go:81] duration metric: took 653.44066ms waiting for pod "etcd-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:55.512627   71244 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:55.899759   71244 pod_ready.go:92] pod "kube-apiserver-addons-463584" in "kube-system" namespace has status "Ready":"True"
	I1206 18:41:55.899798   71244 pod_ready.go:81] duration metric: took 387.162591ms waiting for pod "kube-apiserver-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:55.899813   71244 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:56.285574   71244 pod_ready.go:92] pod "kube-controller-manager-addons-463584" in "kube-system" namespace has status "Ready":"True"
	I1206 18:41:56.285601   71244 pod_ready.go:81] duration metric: took 385.780133ms waiting for pod "kube-controller-manager-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:56.285612   71244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tv776" in "kube-system" namespace to be "Ready" ...
	I1206 18:41:56.563100   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.275515591s)
	I1206 18:41:56.563166   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:56.563182   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:56.563255   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.170782816s)
	I1206 18:41:56.563289   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.109800218s)
	I1206 18:41:56.563301   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:56.563309   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:56.563314   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:56.563319   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:56.563603   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:56.563637   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:56.563663   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:56.563700   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:56.563730   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:56.563709   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:56.563764   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:56.563778   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:56.563780   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:56.563791   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:56.563802   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:56.563792   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:56.563856   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:56.563741   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:56.563896   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:56.564145   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:56.564177   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:56.564180   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:56.564193   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:56.564211   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:56.564220   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:56.564258   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:56.564272   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:56.564288   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:57.619610   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.13627402s)
	I1206 18:41:57.619649   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.109449523s)
	I1206 18:41:57.619673   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:57.619687   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:57.619690   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:57.619704   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:57.620059   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:57.620078   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:57.620089   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:57.620086   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:57.620098   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:57.620109   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:57.620124   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:57.620132   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:57.620059   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:57.620422   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:57.620441   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:57.620477   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:57.620500   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:57.620476   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:57.893171   71244 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 18:41:57.893244   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:57.896426   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:57.896984   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:57.897021   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:57.897182   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:57.897530   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:57.897725   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:57.897910   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:57.998234   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:57.998263   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:57.998590   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:57.998616   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:58.088678   71244 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 18:41:58.141459   71244 addons.go:231] Setting addon gcp-auth=true in "addons-463584"
	I1206 18:41:58.141515   71244 host.go:66] Checking if "addons-463584" exists ...
	I1206 18:41:58.141824   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:58.141855   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:58.156691   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I1206 18:41:58.157240   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:58.157876   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:58.157900   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:58.158257   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:58.158872   71244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:41:58.158933   71244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:41:58.173731   71244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I1206 18:41:58.174266   71244 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:41:58.174764   71244 main.go:141] libmachine: Using API Version  1
	I1206 18:41:58.174789   71244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:41:58.175156   71244 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:41:58.175374   71244 main.go:141] libmachine: (addons-463584) Calling .GetState
	I1206 18:41:58.177128   71244 main.go:141] libmachine: (addons-463584) Calling .DriverName
	I1206 18:41:58.177397   71244 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 18:41:58.177429   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHHostname
	I1206 18:41:58.180255   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:58.180669   71244 main.go:141] libmachine: (addons-463584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:40:00", ip: ""} in network mk-addons-463584: {Iface:virbr1 ExpiryTime:2023-12-06 19:41:05 +0000 UTC Type:0 Mac:52:54:00:76:40:00 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-463584 Clientid:01:52:54:00:76:40:00}
	I1206 18:41:58.180701   71244 main.go:141] libmachine: (addons-463584) DBG | domain addons-463584 has defined IP address 192.168.39.94 and MAC address 52:54:00:76:40:00 in network mk-addons-463584
	I1206 18:41:58.180863   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHPort
	I1206 18:41:58.181068   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHKeyPath
	I1206 18:41:58.181223   71244 main.go:141] libmachine: (addons-463584) Calling .GetSSHUsername
	I1206 18:41:58.181372   71244 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/addons-463584/id_rsa Username:docker}
	I1206 18:41:58.450235   71244 pod_ready.go:102] pod "kube-proxy-tv776" in "kube-system" namespace has status "Ready":"False"
	I1206 18:41:59.486194   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.928824328s)
	I1206 18:41:59.486254   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.486267   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.486297   71244 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.678857838s)
	I1206 18:41:59.486330   71244 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
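
The bash pipeline whose completion is logged above edits the coredns ConfigMap so that cluster workloads can resolve host.minikube.internal to the host-side gateway address (192.168.39.1 here). A rough client-go sketch of the hosts{} part of that edit, with a hypothetical injectHostRecord helper (minikube itself performs the change with the kubectl/sed pipeline shown in the log, not with client-go):

    package main

    import (
        "context"
        "fmt"
        "regexp"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // injectHostRecord inserts a hosts{} block for host.minikube.internal in
    // front of the forward directive of the coredns Corefile and writes the
    // ConfigMap back, the same effect as the sed edit in the log.
    func injectHostRecord(cs *kubernetes.Clientset, hostIP string) error {
        ctx := context.TODO()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        re := regexp.MustCompile(`(?m)^( *forward \. /etc/resolv\.conf.*)$`)
        cm.Data["Corefile"] = re.ReplaceAllString(cm.Data["Corefile"], block+"$1")
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := injectHostRecord(cs, "192.168.39.1"); err != nil {
            panic(err)
        }
    }
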
	I1206 18:41:59.486405   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.639922381s)
	I1206 18:41:59.486428   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.486456   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.486504   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.634215755s)
	I1206 18:41:59.486545   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.486564   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.486610   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.621108346s)
	I1206 18:41:59.486642   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.486691   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.486727   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.486747   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.486773   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.486791   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.486805   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.486814   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.486826   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.238859096s)
	I1206 18:41:59.486841   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	W1206 18:41:59.486863   71244 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 18:41:59.486877   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.486886   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.486894   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.486893   71244 retry.go:31] will retry after 136.376105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
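
The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the CRD is not yet established when the custom resource is validated, so the apply exits with status 1 and minikube schedules a retry (visible further down in the log). One way to make such a sequence deterministic is to wait for the CRD's Established condition before applying resources of that kind; a sketch using the apiextensions clientset (an illustration of the general technique, not what minikube does):

    package main

    import (
        "context"
        "fmt"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitCRDEstablished polls a CustomResourceDefinition until the API server
    // marks it Established, after which resources of that kind apply cleanly.
    func waitCRDEstablished(c *apiextclient.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("CRD %s not established within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        c, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitCRDEstablished(c, "volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute))
    }
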
	I1206 18:41:59.486903   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.486708   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.486701   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.617283072s)
	I1206 18:41:59.486984   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.117686196s)
	I1206 18:41:59.486831   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.487008   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.487010   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.487018   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.487009   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.487047   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.487097   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.487124   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.487138   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.487149   71244 addons.go:467] Verifying addon ingress=true in "addons-463584"
	I1206 18:41:59.487198   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.487234   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.487248   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.490768   71244 out.go:177] * Verifying ingress addon...
	I1206 18:41:59.487843   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.487869   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.487884   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.487906   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.487921   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.487938   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.487954   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.487979   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.492144   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.492162   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.492161   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.492186   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.492205   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.492190   71244 addons.go:467] Verifying addon registry=true in "addons-463584"
	I1206 18:41:59.492208   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.492261   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.494097   71244 out.go:177] * Verifying registry addon...
	I1206 18:41:59.492172   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.492344   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.492517   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.492541   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.493012   71244 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 18:41:59.494201   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.494265   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.494503   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.496501   71244 addons.go:467] Verifying addon metrics-server=true in "addons-463584"
	I1206 18:41:59.496521   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.494542   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.494561   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:41:59.494576   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.496580   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.497619   71244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 18:41:59.521029   71244 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 18:41:59.521051   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:41:59.521700   71244 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 18:41:59.521719   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:41:59.545401   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:41:59.548902   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:41:59.548922   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:41:59.549179   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:41:59.549195   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:41:59.555800   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:41:59.623799   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 18:42:00.061753   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:00.098376   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:00.605026   71244 pod_ready.go:102] pod "kube-proxy-tv776" in "kube-system" namespace has status "Ready":"False"
	I1206 18:42:00.695767   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:00.696104   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:00.828506   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.169913011s)
	I1206 18:42:00.828521   71244 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.651101812s)
	I1206 18:42:00.828571   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:42:00.828584   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:42:00.830578   71244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 18:42:00.828895   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:42:00.828919   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:42:00.833636   71244 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1206 18:42:00.832254   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:42:00.835207   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:42:00.835222   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:42:00.835246   71244 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 18:42:00.835269   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 18:42:00.835540   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:42:00.835592   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:42:00.835602   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:42:00.835621   71244 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-463584"
	I1206 18:42:00.837290   71244 out.go:177] * Verifying csi-hostpath-driver addon...
	I1206 18:42:00.839653   71244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
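
The kapi.go waits for ingress-nginx, registry, csi-hostpath-driver and gcp-auth all follow the same shape: list the pods matching the addon's label selector, then poll until every matching pod reports Ready. A compact client-go sketch of that loop, complementing the per-pod wait sketched earlier, with a hypothetical podsReady helper (an illustration, not minikube's kapi.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether at least one pod matches the label selector and
    // every matching pod has its Ready condition set to True.
    func podsReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            if ok, err := podsReady(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err == nil && ok {
                break
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("csi-hostpath-driver pods are Ready")
    }
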
	I1206 18:42:00.945465   71244 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 18:42:00.945493   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 18:42:00.979947   71244 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 18:42:00.979977   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:01.044109   71244 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 18:42:01.044138   71244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1206 18:42:01.080427   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:01.127713   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:01.133526   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:01.144717   71244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 18:42:01.610687   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:01.674359   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:01.675105   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:02.067719   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:02.070710   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:02.093200   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:02.565446   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:02.607098   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:02.607946   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:02.679307   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.055453505s)
	I1206 18:42:02.679373   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:42:02.679390   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:42:02.679734   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:42:02.679781   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:42:02.679805   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:42:02.679824   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:42:02.679836   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:42:02.680124   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:42:02.680155   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:42:02.899677   71244 pod_ready.go:102] pod "kube-proxy-tv776" in "kube-system" namespace has status "Ready":"False"
	I1206 18:42:03.087030   71244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.942266842s)
	I1206 18:42:03.087085   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:42:03.087098   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:42:03.087404   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:42:03.087426   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:42:03.087433   71244 main.go:141] libmachine: (addons-463584) DBG | Closing plugin on server side
	I1206 18:42:03.087436   71244 main.go:141] libmachine: Making call to close driver server
	I1206 18:42:03.087447   71244 main.go:141] libmachine: (addons-463584) Calling .Close
	I1206 18:42:03.087654   71244 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:42:03.087669   71244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:42:03.089763   71244 addons.go:467] Verifying addon gcp-auth=true in "addons-463584"
	I1206 18:42:03.091973   71244 out.go:177] * Verifying gcp-auth addon...
	I1206 18:42:03.094464   71244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 18:42:03.104170   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:03.104539   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:03.117670   71244 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 18:42:03.117696   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:03.118213   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:03.158282   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:03.552072   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:03.562600   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:03.606613   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:03.666849   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:04.051738   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:04.066908   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:04.086088   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:04.162252   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:04.550852   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:04.560989   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:04.586390   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:04.663197   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:05.051309   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:05.061542   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:05.087299   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:05.163840   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:05.385975   71244 pod_ready.go:102] pod "kube-proxy-tv776" in "kube-system" namespace has status "Ready":"False"
	I1206 18:42:05.552148   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:05.561558   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:05.586723   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:05.662453   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:06.051375   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:06.061335   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:06.089950   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:06.167103   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:06.552018   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:06.562273   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:06.586369   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:06.662636   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:07.051370   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:07.061355   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:07.087096   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:07.162074   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:07.386532   71244 pod_ready.go:102] pod "kube-proxy-tv776" in "kube-system" namespace has status "Ready":"False"
	I1206 18:42:07.552889   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:07.560470   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:07.589244   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:07.662424   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:08.059609   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:08.071815   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:08.088429   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:08.162530   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:08.387294   71244 pod_ready.go:92] pod "kube-proxy-tv776" in "kube-system" namespace has status "Ready":"True"
	I1206 18:42:08.387318   71244 pod_ready.go:81] duration metric: took 12.101700517s waiting for pod "kube-proxy-tv776" in "kube-system" namespace to be "Ready" ...
	I1206 18:42:08.387327   71244 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:42:08.392708   71244 pod_ready.go:92] pod "kube-scheduler-addons-463584" in "kube-system" namespace has status "Ready":"True"
	I1206 18:42:08.392728   71244 pod_ready.go:81] duration metric: took 5.395256ms waiting for pod "kube-scheduler-addons-463584" in "kube-system" namespace to be "Ready" ...
	I1206 18:42:08.392734   71244 pod_ready.go:38] duration metric: took 17.048703584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 18:42:08.392750   71244 api_server.go:52] waiting for apiserver process to appear ...
	I1206 18:42:08.392822   71244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 18:42:08.409215   71244 api_server.go:72] duration metric: took 18.272999298s to wait for apiserver process to appear ...
	I1206 18:42:08.409260   71244 api_server.go:88] waiting for apiserver healthz status ...
	I1206 18:42:08.409277   71244 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1206 18:42:08.415235   71244 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I1206 18:42:08.416663   71244 api_server.go:141] control plane version: v1.28.4
	I1206 18:42:08.416692   71244 api_server.go:131] duration metric: took 7.424879ms to wait for apiserver health ...
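The healthz and control-plane-version probes logged above can be reproduced through the API server; a minimal manual sketch, assuming the addons-463584 kubeconfig context is still valid (not part of the captured run):
  # should print "ok", mirroring the /healthz response above
  kubectl --context addons-463584 get --raw='/healthz'
  # reports the server version (v1.28.4 in this run)
  kubectl --context addons-463584 version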
	I1206 18:42:08.416704   71244 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 18:42:08.424923   71244 system_pods.go:59] 18 kube-system pods found
	I1206 18:42:08.424956   71244 system_pods.go:61] "coredns-5dd5756b68-zr82n" [24c2879a-3680-449d-b5ee-693c29e7e488] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 18:42:08.424963   71244 system_pods.go:61] "csi-hostpath-attacher-0" [0d06f6e4-b372-4564-bfff-0a47aaef8f85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 18:42:08.424973   71244 system_pods.go:61] "csi-hostpath-resizer-0" [a31831fd-9151-4a97-b93d-142bdb5c2648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 18:42:08.424980   71244 system_pods.go:61] "csi-hostpathplugin-flns5" [adaad5af-f54f-4af8-b891-02e38cdb1b38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 18:42:08.424985   71244 system_pods.go:61] "etcd-addons-463584" [08381a56-de36-4564-a709-b036a00864a0] Running
	I1206 18:42:08.424989   71244 system_pods.go:61] "kube-apiserver-addons-463584" [e167280f-1793-4b18-b432-33da6f73ec8b] Running
	I1206 18:42:08.424994   71244 system_pods.go:61] "kube-controller-manager-addons-463584" [e8547f7e-0acf-4d3d-9950-df7101f48b5c] Running
	I1206 18:42:08.425000   71244 system_pods.go:61] "kube-ingress-dns-minikube" [4dc0de3f-4f81-4871-bad8-1895e8cc7190] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 18:42:08.425004   71244 system_pods.go:61] "kube-proxy-tv776" [c4aa92b0-8a65-46a1-bf5d-048065163dd7] Running
	I1206 18:42:08.425008   71244 system_pods.go:61] "kube-scheduler-addons-463584" [221b8940-56cf-4ef3-a11b-5e7f12d8bbcb] Running
	I1206 18:42:08.425016   71244 system_pods.go:61] "metrics-server-7c66d45ddc-c2xz8" [613bd5aa-3d2c-4f94-8aa2-48ed4494f773] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 18:42:08.425022   71244 system_pods.go:61] "nvidia-device-plugin-daemonset-7xjql" [6bad12d8-6f03-44fe-9b7e-a74f9991b664] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 18:42:08.425040   71244 system_pods.go:61] "registry-6thb7" [0b3afa89-7f8a-4644-963a-c31b40f2a80d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 18:42:08.425046   71244 system_pods.go:61] "registry-proxy-smdkb" [e907c989-b9af-4449-a8a9-628e470fe380] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 18:42:08.425051   71244 system_pods.go:61] "snapshot-controller-58dbcc7b99-dsjhr" [28444feb-d596-4e27-bf0c-5b28fffc5800] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 18:42:08.425058   71244 system_pods.go:61] "snapshot-controller-58dbcc7b99-vxfrc" [71bb4e47-a614-4674-8fca-2c3dedbe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 18:42:08.425065   71244 system_pods.go:61] "storage-provisioner" [22df35a9-9f09-4a46-af08-ecc84d038638] Running
	I1206 18:42:08.425070   71244 system_pods.go:61] "tiller-deploy-7b677967b9-sjrq2" [c3367096-b874-410e-ad47-aa17ee4de5b2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1206 18:42:08.425079   71244 system_pods.go:74] duration metric: took 8.365147ms to wait for pod list to return data ...
	I1206 18:42:08.425086   71244 default_sa.go:34] waiting for default service account to be created ...
	I1206 18:42:08.427517   71244 default_sa.go:45] found service account: "default"
	I1206 18:42:08.427547   71244 default_sa.go:55] duration metric: took 2.448528ms for default service account to be created ...
	I1206 18:42:08.427556   71244 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 18:42:08.442668   71244 system_pods.go:86] 18 kube-system pods found
	I1206 18:42:08.442796   71244 system_pods.go:89] "coredns-5dd5756b68-zr82n" [24c2879a-3680-449d-b5ee-693c29e7e488] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 18:42:08.442819   71244 system_pods.go:89] "csi-hostpath-attacher-0" [0d06f6e4-b372-4564-bfff-0a47aaef8f85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 18:42:08.442828   71244 system_pods.go:89] "csi-hostpath-resizer-0" [a31831fd-9151-4a97-b93d-142bdb5c2648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 18:42:08.442835   71244 system_pods.go:89] "csi-hostpathplugin-flns5" [adaad5af-f54f-4af8-b891-02e38cdb1b38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 18:42:08.442843   71244 system_pods.go:89] "etcd-addons-463584" [08381a56-de36-4564-a709-b036a00864a0] Running
	I1206 18:42:08.442851   71244 system_pods.go:89] "kube-apiserver-addons-463584" [e167280f-1793-4b18-b432-33da6f73ec8b] Running
	I1206 18:42:08.442858   71244 system_pods.go:89] "kube-controller-manager-addons-463584" [e8547f7e-0acf-4d3d-9950-df7101f48b5c] Running
	I1206 18:42:08.442864   71244 system_pods.go:89] "kube-ingress-dns-minikube" [4dc0de3f-4f81-4871-bad8-1895e8cc7190] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 18:42:08.442871   71244 system_pods.go:89] "kube-proxy-tv776" [c4aa92b0-8a65-46a1-bf5d-048065163dd7] Running
	I1206 18:42:08.442876   71244 system_pods.go:89] "kube-scheduler-addons-463584" [221b8940-56cf-4ef3-a11b-5e7f12d8bbcb] Running
	I1206 18:42:08.442882   71244 system_pods.go:89] "metrics-server-7c66d45ddc-c2xz8" [613bd5aa-3d2c-4f94-8aa2-48ed4494f773] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 18:42:08.442895   71244 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjql" [6bad12d8-6f03-44fe-9b7e-a74f9991b664] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 18:42:08.442904   71244 system_pods.go:89] "registry-6thb7" [0b3afa89-7f8a-4644-963a-c31b40f2a80d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 18:42:08.442911   71244 system_pods.go:89] "registry-proxy-smdkb" [e907c989-b9af-4449-a8a9-628e470fe380] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 18:42:08.442919   71244 system_pods.go:89] "snapshot-controller-58dbcc7b99-dsjhr" [28444feb-d596-4e27-bf0c-5b28fffc5800] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 18:42:08.442928   71244 system_pods.go:89] "snapshot-controller-58dbcc7b99-vxfrc" [71bb4e47-a614-4674-8fca-2c3dedbe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 18:42:08.442935   71244 system_pods.go:89] "storage-provisioner" [22df35a9-9f09-4a46-af08-ecc84d038638] Running
	I1206 18:42:08.442942   71244 system_pods.go:89] "tiller-deploy-7b677967b9-sjrq2" [c3367096-b874-410e-ad47-aa17ee4de5b2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1206 18:42:08.442952   71244 system_pods.go:126] duration metric: took 15.386824ms to wait for k8s-apps to be running ...
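A comparable view of the kube-system pod states enumerated above can be taken manually; a sketch assuming the cluster is still up (not part of the captured run):
  # READY/STATUS columns correspond to the Running / ContainersNotReady states logged by system_pods.go
  kubectl --context addons-463584 -n kube-system get pods -o wide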
	I1206 18:42:08.442962   71244 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 18:42:08.443011   71244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:42:08.460050   71244 system_svc.go:56] duration metric: took 17.076504ms WaitForService to wait for kubelet.
	I1206 18:42:08.460089   71244 kubeadm.go:581] duration metric: took 18.323881649s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 18:42:08.460116   71244 node_conditions.go:102] verifying NodePressure condition ...
	I1206 18:42:08.463435   71244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 18:42:08.463473   71244 node_conditions.go:123] node cpu capacity is 2
	I1206 18:42:08.463489   71244 node_conditions.go:105] duration metric: took 3.3673ms to run NodePressure ...
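The ephemeral-storage and CPU capacity figures above come from the node status; a hedged manual equivalent, assuming the node is named addons-463584 after the profile (not part of the captured run):
  # prints the node capacity map (cpu, ephemeral-storage, memory)
  kubectl --context addons-463584 get node addons-463584 -o jsonpath='{.status.capacity}'
  # the NodePressure check reads the node conditions (MemoryPressure, DiskPressure, PIDPressure)
  kubectl --context addons-463584 describe node addons-463584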
	I1206 18:42:08.463504   71244 start.go:228] waiting for startup goroutines ...
	I1206 18:42:08.553052   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:08.561307   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:08.589705   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:08.662525   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:09.050389   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:09.063834   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:09.085946   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:09.162039   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:09.551066   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:09.561595   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:09.587215   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:09.674337   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:10.051327   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:10.065925   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:10.088355   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:10.167665   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:10.551722   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:10.562688   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:10.587330   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:10.663128   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:11.052694   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:11.064881   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:11.087820   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:11.163293   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:11.552382   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:11.561106   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:11.587325   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:11.662728   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:12.050680   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:12.062199   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:12.087097   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:12.162810   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:12.550970   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:12.561274   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:12.591719   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:12.662401   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:13.051290   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:13.061219   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:13.088507   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:13.162516   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:13.555337   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:13.562969   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:13.587610   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:13.662557   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:14.051522   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:14.061785   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:14.087205   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:14.162613   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:14.553461   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:14.570313   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:14.586855   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:14.664042   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:15.050586   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:15.063428   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:15.091263   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:15.164019   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:15.550600   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:15.560490   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:15.587861   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:15.662639   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:16.059692   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:16.069069   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:16.088789   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:16.164750   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:16.567305   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:16.568790   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:16.603433   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:16.665942   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:17.050866   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:17.065178   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:17.091361   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:17.162069   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:17.552659   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:17.561065   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:17.586258   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:17.662954   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:18.050703   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:18.060671   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:18.096366   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:18.162983   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:18.555493   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:18.562769   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:18.587687   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:18.662758   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:19.049993   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:19.061591   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:19.086819   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:19.163324   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:19.553602   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:19.574905   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:19.599687   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:19.662536   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:20.050665   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:20.067400   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:20.086752   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:20.162314   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:20.550694   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:20.565938   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:20.587114   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:20.662085   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:21.050694   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:21.063097   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:21.086480   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:21.162741   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:21.551398   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:21.561448   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:21.587359   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:21.665757   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:22.210485   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:22.212141   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:22.217710   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:22.219555   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:22.550416   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:22.561193   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:22.586702   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:22.664416   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:23.051755   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:23.061256   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:23.086913   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:23.162839   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:23.550771   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:23.563704   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:23.586521   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:23.662865   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:24.050747   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:24.060792   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:24.087773   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:24.164871   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:24.551395   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:24.561032   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:24.587318   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:24.662095   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:25.052508   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:25.060684   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:25.086989   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:25.162673   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:25.551830   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:25.561561   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:25.586956   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:25.662576   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:26.052572   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:26.060355   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:26.086727   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:26.163260   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:26.552414   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:26.561937   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:26.587231   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:26.663033   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:27.050478   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:27.060187   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:27.086471   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:27.162185   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:27.551642   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:27.561092   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:27.586157   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:27.662229   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:28.051391   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:28.060999   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:28.086260   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:28.162403   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:28.551537   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:28.561121   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:28.586442   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:28.663040   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:29.052381   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:29.060960   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:29.086019   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:29.174141   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:29.551751   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:29.561867   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:29.586862   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:29.663982   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:30.050638   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:30.060962   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:30.090002   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:30.162806   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:30.550585   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:30.565876   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:30.589297   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:30.662786   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:31.052456   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:31.061098   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:31.090830   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:31.162549   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:31.552269   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:31.561625   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:31.587666   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:31.662559   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:32.051182   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:32.061975   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:32.086488   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:32.202253   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:32.550948   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:32.560542   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:32.586180   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:32.663797   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:33.050576   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:33.061185   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:33.087144   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:33.163202   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:33.551408   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:33.563887   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:33.587654   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:33.663972   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:34.051151   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:34.061254   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:34.086892   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:34.162965   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:34.550664   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:34.563711   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:34.587048   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:34.664191   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:35.050395   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:35.062366   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:35.087440   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:35.163427   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:35.551392   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:35.562581   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:35.590689   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:35.663699   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:36.068031   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:36.073219   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:36.091999   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:36.163009   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:36.550334   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:36.561743   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:36.603780   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:36.662678   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:37.050568   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:37.066143   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:37.091208   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:37.173471   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:37.551495   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:37.568942   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:37.586688   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:37.662275   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:38.050995   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:38.061748   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:38.087711   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:38.177049   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:38.551233   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:38.561590   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:38.586477   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:38.668558   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:39.051344   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:39.061170   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:39.086638   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:39.162878   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:39.551600   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:39.561684   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:39.587359   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:39.662776   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:40.050198   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:40.063420   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:40.088106   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:40.168306   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:40.551022   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:40.561134   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:40.589039   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:40.663119   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:41.051065   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:41.062227   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:41.086406   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:41.162564   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:41.744931   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:41.745517   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:41.746898   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:41.750170   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:42.051451   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:42.061697   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:42.088842   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:42.163920   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:42.551222   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:42.561942   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:42.586347   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:42.662584   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:43.052015   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:43.060742   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:43.089529   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:43.162932   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:43.550733   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:43.561052   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:43.589842   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:43.665468   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:44.050626   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:44.060335   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:44.087265   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:44.162283   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:44.550527   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:44.561163   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:44.586299   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:44.663072   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:45.049937   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:45.064462   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:45.089122   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:45.162388   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:45.552267   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:45.563901   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:45.587199   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:45.662924   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:46.054920   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:46.061027   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:46.089156   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:46.162711   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:46.550455   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:46.564414   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:46.586527   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:46.662664   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:47.050871   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:47.062268   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:47.087466   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:47.162894   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:47.550444   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:47.561221   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:47.586476   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:47.662507   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:48.050450   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:48.060350   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:48.086283   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:48.162276   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:48.553334   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:48.573607   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:48.598328   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:48.665605   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:49.052671   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:49.063439   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:49.092350   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:49.162896   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:49.550733   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:49.560277   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:49.586580   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:49.672272   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:50.050795   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:50.062811   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:50.087341   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:50.163732   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:50.552060   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:50.564013   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:50.586970   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:51.109798   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:51.111578   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:51.120811   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:51.125364   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:51.168308   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:51.551109   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:51.564211   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:51.590938   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:51.663066   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:52.050726   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:52.065028   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:52.086405   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:52.162305   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:52.552514   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:52.562637   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:52.589190   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:52.663717   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:53.051446   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:53.061687   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:53.089933   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:53.588580   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:53.594006   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:53.594356   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:53.596441   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:53.662724   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:54.050939   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:54.061690   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:54.087540   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:54.162272   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:54.551335   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:54.561540   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:54.587727   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:54.662783   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:55.052045   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:55.060809   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:55.087374   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:55.164712   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:55.551631   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:55.560992   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:55.592693   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:55.662692   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:56.052167   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:56.061085   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:56.086240   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:56.162522   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:56.549909   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:56.560455   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:56.587053   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:56.663154   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:57.063402   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:57.065424   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:57.087508   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:57.166553   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:57.550211   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:57.561380   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:42:57.590954   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:57.664582   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:58.052483   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:58.061652   71244 kapi.go:107] duration metric: took 58.564026618s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 18:42:58.094158   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:58.162786   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:58.551421   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:58.586137   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:58.662793   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:59.051349   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:59.102110   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:59.193563   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:42:59.551312   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:42:59.587432   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:42:59.663222   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:00.050959   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:00.091165   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:00.163078   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:00.776894   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:00.777062   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:00.780268   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:01.050535   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:01.087263   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:01.163187   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:01.551579   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:01.588620   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:01.668075   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:02.050975   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:02.087207   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:02.162211   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:02.551696   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:02.588045   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:02.662287   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:03.051678   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:03.087088   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:03.163196   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:03.560769   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:03.587826   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:03.665581   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:04.059583   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:04.089770   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:04.190586   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:04.551305   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:04.594873   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:04.665836   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:05.077576   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:05.110760   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:05.164956   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:05.558802   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:05.587482   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:05.662528   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:06.108918   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:06.122031   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:06.169918   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:06.551482   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:06.586869   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:06.662665   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:07.051299   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:07.087971   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:07.164047   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:07.550407   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:07.586802   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:07.663199   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:08.051552   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:08.091047   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:08.169551   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:08.550940   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:08.588369   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:08.663015   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:09.051750   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:09.087308   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:09.434887   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:09.551256   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:09.586407   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:09.662925   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:10.050700   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:10.087645   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:10.163213   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:10.550755   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:10.587827   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:10.662858   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:11.071992   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:11.093439   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:11.162994   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:11.551011   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:11.588836   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:11.663309   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:12.058872   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:12.089901   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:12.163685   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:12.551928   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:12.587249   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:12.662222   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:13.050943   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:13.087001   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:13.163144   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:13.550390   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:13.586806   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:13.663245   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:14.050879   71244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:43:14.092038   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:14.163065   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:14.551067   71244 kapi.go:107] duration metric: took 1m15.058049148s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 18:43:14.587556   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:14.662333   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:15.090294   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:15.162940   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:15.587525   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:15.670552   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:16.093222   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:16.165566   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:16.595166   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:16.667336   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:17.088361   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:17.174132   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:17.587268   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:17.662503   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:18.086510   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:18.162881   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:43:18.589353   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:18.662783   71244 kapi.go:107] duration metric: took 1m15.568314496s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 18:43:18.664790   71244 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-463584 cluster.
	I1206 18:43:18.666311   71244 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 18:43:18.667767   71244 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 18:43:19.087568   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:19.588774   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:20.174234   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:20.587613   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:21.094750   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:21.587642   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:22.088714   71244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:43:22.588061   71244 kapi.go:107] duration metric: took 1m21.748404016s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 18:43:22.589987   71244 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, default-storageclass, helm-tiller, metrics-server, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 18:43:22.591305   71244 addons.go:502] enable addons completed in 1m32.589890288s: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner default-storageclass helm-tiller metrics-server inspektor-gadget storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1206 18:43:22.591349   71244 start.go:233] waiting for cluster config update ...
	I1206 18:43:22.591377   71244 start.go:242] writing updated cluster config ...
	I1206 18:43:22.591638   71244 ssh_runner.go:195] Run: rm -f paused
	I1206 18:43:22.643884   71244 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 18:43:22.645679   71244 out.go:177] * Done! kubectl is now configured to use "addons-463584" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 18:41:02 UTC, ends at Wed 2023-12-06 18:46:21 UTC. --
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.236620538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701888381236605118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=8dc4ef65-7366-4219-87d4-0a331e92cc9f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.237505863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9bb886d-2eca-4904-914e-b79b897c3257 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.237673429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9bb886d-2eca-4904-914e-b79b897c3257 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.237992410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f9f93b1f1fe6d82864f59ae55153033633e0c5ddd532d59fac57899420b6ec1,PodSandboxId:0360410ef8325ca7c40e9b90ba82386cb8cf9db585aa641532f20bfb79779e12,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701888372838175474,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-dgtqd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30aea9b3-7f82-4c35-a865-5361bb32d2af,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5b6a00,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523ce9862e88c69c6b4b180d1a777432dc391acd64f1cf89b8cd0917028436ae,PodSandboxId:9b2824b9583b18198df0addd1caceec56248cf69d1446f826756084d82fb1fd6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888234855475833,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d83abb50-84f5-4145-8a0d-153f7205e73e,},Annotations:map[string]string{io.kubernet
es.container.hash: 79ddc210,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24f56fb1a9313eae77502e794f15d62c153f745e5f57945520883828557ad5,PodSandboxId:193902853b2db760aa99856ca8769906c25a29603a8ce272187092e97425d0fd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701888225009876941,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-4mzzg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 4a654f18-b61c-4682-9ab4-a722d11bc12e,},Annotations:map[string]string{io.kubernetes.container.hash: 49611788,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ef2883756f102fb4a3ca7760115fc548fc36ef2f32691fb4c944ed3ccdfd6e,PodSandboxId:399034fbdcb1544d96094883f9f12ac61b3920837921a511a9f9a516e96b1cdf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701888197374036081,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-zhdjp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a257bc6d-f743-410b-a5ec-7d040ef09d78,},Annotations:map[string]string{io.kubernetes.container.hash: d1b9f936,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efbd98eb6d3905fa3c0e6f56c59ab537730a326a9362ef5fc369b93301cfecf,PodSandboxId:a689d3c5b16709ca2d73f37ec1d782df3feeddda7c3f79593ac2b3b08c1856ed,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888181683441942,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kmxf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26445108-ff59-451d-b784-b003244af934,},Annotations:map[string]string{io.kubernetes.container.hash: 23771731,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e720b7a24196808a234b831e088ffac76e06ec3f4d81c856f7eff841dfc93f9e,PodSandboxId:1f0e0901c63043401a7ec7f9d5f63f528c526d6b01fd0747d41144a045d63928,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888177159436280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t4xzd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65a1c953-bb8b-4036-a3c2-04a722d1e615,},Annotations:map[string]string{io.kubernetes.container.hash: b82fae73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97de121996c07f676f69835c743ff85a683dd45b3857ba4c665227ac4ad27829,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888158207708121,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464ea798a4e498a65b5658aecddfa649ab678bc7511cf6713b7c27df90551388,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888126887549519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578d24f42f5744f5b95637b5d684d327e3996315dbd00bdf2550265f90d6265d,PodSandboxId:51beec3e1c5053e89adc7a61ab118c8bdedec3e17d9638815098000f6d4596e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed
1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701888126136620104,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv776,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4aa92b0-8a65-46a1-bf5d-048065163dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3bafbd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed136a8f0ea64e0ec5be28bdd5f9d8216db8067974831250ba054d9ce5ac82,PodSandboxId:bd1cc776cf631ce65cd4aea7333d7184ef7fc41201db0cfe1535b7c3f84344c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ad
e8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701888117471988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zr82n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c2879a-3680-449d-b5ee-693c29e7e488,},Annotations:map[string]string{io.kubernetes.container.hash: 7f573fe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e24192cc4fe48447c905abdd80f32b7ed0d1d3df845833c200a17744f8737a,PodSandboxId:6bcf0ab4df389a7873a88f87745b85a01a739467b4f1f5bc35d853e3a015dc5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Im
age:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701888090810867566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0ddb51770711db0381c0565a2f42a65,},Annotations:map[string]string{io.kubernetes.container.hash: b8424952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22abfe2488e46bf0c0186a270489e789ec244e5571e14f5f7266d21a6612d603,PodSandboxId:9fd3986e85b94a919bcfdcf10526e9f004c147ca8d89aff1fc91cd3741abec1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701888090658642647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adf4f828b5baa957e29181c2bc1838c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d919ff61edd6a0d46d062dbfb049d867aa05d25355dc6e45bb1208d3b0f743f,PodSandboxId:9d15c03eb84cce43f4446588655e14e47000a450d3c2fc28428c3c38390feb06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02
d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701888090272531934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef192055271b430717c6ef61b607e00a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821f2d319062ee0290a2f4baa1da9eb2f87097a11c4aa3de75f64a9b1e673394,PodSandboxId:c5f14aa8cd37054fe50dfac52396b22f8f5ec0ff452e43648a0303af8e43bb2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e3697
0370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701888090153412980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b8e333302ca9711ad74186e2fc6a52,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9bb886d-2eca-4904-914e-b79b897c3257 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.276994285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b233f183-ea7c-4c21-b271-39ed2e043ebb name=/runtime.v1.RuntimeService/Version
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.277054789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b233f183-ea7c-4c21-b271-39ed2e043ebb name=/runtime.v1.RuntimeService/Version
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.278797950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9a46e5cb-4759-4d4e-97e1-350915373ea0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.280297923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701888381280281209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=9a46e5cb-4759-4d4e-97e1-350915373ea0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.280910536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f933abf9-aa1c-42ad-b01c-d9ffdb90df8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.280985647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f933abf9-aa1c-42ad-b01c-d9ffdb90df8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.281436163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f9f93b1f1fe6d82864f59ae55153033633e0c5ddd532d59fac57899420b6ec1,PodSandboxId:0360410ef8325ca7c40e9b90ba82386cb8cf9db585aa641532f20bfb79779e12,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701888372838175474,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-dgtqd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30aea9b3-7f82-4c35-a865-5361bb32d2af,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5b6a00,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523ce9862e88c69c6b4b180d1a777432dc391acd64f1cf89b8cd0917028436ae,PodSandboxId:9b2824b9583b18198df0addd1caceec56248cf69d1446f826756084d82fb1fd6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888234855475833,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d83abb50-84f5-4145-8a0d-153f7205e73e,},Annotations:map[string]string{io.kubernet
es.container.hash: 79ddc210,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24f56fb1a9313eae77502e794f15d62c153f745e5f57945520883828557ad5,PodSandboxId:193902853b2db760aa99856ca8769906c25a29603a8ce272187092e97425d0fd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701888225009876941,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-4mzzg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 4a654f18-b61c-4682-9ab4-a722d11bc12e,},Annotations:map[string]string{io.kubernetes.container.hash: 49611788,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ef2883756f102fb4a3ca7760115fc548fc36ef2f32691fb4c944ed3ccdfd6e,PodSandboxId:399034fbdcb1544d96094883f9f12ac61b3920837921a511a9f9a516e96b1cdf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701888197374036081,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-zhdjp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a257bc6d-f743-410b-a5ec-7d040ef09d78,},Annotations:map[string]string{io.kubernetes.container.hash: d1b9f936,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efbd98eb6d3905fa3c0e6f56c59ab537730a326a9362ef5fc369b93301cfecf,PodSandboxId:a689d3c5b16709ca2d73f37ec1d782df3feeddda7c3f79593ac2b3b08c1856ed,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888181683441942,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kmxf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26445108-ff59-451d-b784-b003244af934,},Annotations:map[string]string{io.kubernetes.container.hash: 23771731,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e720b7a24196808a234b831e088ffac76e06ec3f4d81c856f7eff841dfc93f9e,PodSandboxId:1f0e0901c63043401a7ec7f9d5f63f528c526d6b01fd0747d41144a045d63928,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888177159436280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t4xzd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65a1c953-bb8b-4036-a3c2-04a722d1e615,},Annotations:map[string]string{io.kubernetes.container.hash: b82fae73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97de121996c07f676f69835c743ff85a683dd45b3857ba4c665227ac4ad27829,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888158207708121,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464ea798a4e498a65b5658aecddfa649ab678bc7511cf6713b7c27df90551388,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888126887549519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578d24f42f5744f5b95637b5d684d327e3996315dbd00bdf2550265f90d6265d,PodSandboxId:51beec3e1c5053e89adc7a61ab118c8bdedec3e17d9638815098000f6d4596e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed
1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701888126136620104,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv776,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4aa92b0-8a65-46a1-bf5d-048065163dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3bafbd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed136a8f0ea64e0ec5be28bdd5f9d8216db8067974831250ba054d9ce5ac82,PodSandboxId:bd1cc776cf631ce65cd4aea7333d7184ef7fc41201db0cfe1535b7c3f84344c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ad
e8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701888117471988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zr82n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c2879a-3680-449d-b5ee-693c29e7e488,},Annotations:map[string]string{io.kubernetes.container.hash: 7f573fe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e24192cc4fe48447c905abdd80f32b7ed0d1d3df845833c200a17744f8737a,PodSandboxId:6bcf0ab4df389a7873a88f87745b85a01a739467b4f1f5bc35d853e3a015dc5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Im
age:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701888090810867566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0ddb51770711db0381c0565a2f42a65,},Annotations:map[string]string{io.kubernetes.container.hash: b8424952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22abfe2488e46bf0c0186a270489e789ec244e5571e14f5f7266d21a6612d603,PodSandboxId:9fd3986e85b94a919bcfdcf10526e9f004c147ca8d89aff1fc91cd3741abec1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701888090658642647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adf4f828b5baa957e29181c2bc1838c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d919ff61edd6a0d46d062dbfb049d867aa05d25355dc6e45bb1208d3b0f743f,PodSandboxId:9d15c03eb84cce43f4446588655e14e47000a450d3c2fc28428c3c38390feb06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02
d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701888090272531934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef192055271b430717c6ef61b607e00a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821f2d319062ee0290a2f4baa1da9eb2f87097a11c4aa3de75f64a9b1e673394,PodSandboxId:c5f14aa8cd37054fe50dfac52396b22f8f5ec0ff452e43648a0303af8e43bb2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e3697
0370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701888090153412980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b8e333302ca9711ad74186e2fc6a52,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f933abf9-aa1c-42ad-b01c-d9ffdb90df8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.316886766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51477986-244c-4bbd-bac7-daa4ef93d446 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.316981475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51477986-244c-4bbd-bac7-daa4ef93d446 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.318452947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ee344b6b-b94f-4afc-8551-56708198dbf7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.319650816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701888381319634331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=ee344b6b-b94f-4afc-8551-56708198dbf7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.320636682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b9580359-58aa-480a-bd77-abc9154a45d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.320707309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b9580359-58aa-480a-bd77-abc9154a45d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.321203956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f9f93b1f1fe6d82864f59ae55153033633e0c5ddd532d59fac57899420b6ec1,PodSandboxId:0360410ef8325ca7c40e9b90ba82386cb8cf9db585aa641532f20bfb79779e12,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701888372838175474,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-dgtqd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30aea9b3-7f82-4c35-a865-5361bb32d2af,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5b6a00,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523ce9862e88c69c6b4b180d1a777432dc391acd64f1cf89b8cd0917028436ae,PodSandboxId:9b2824b9583b18198df0addd1caceec56248cf69d1446f826756084d82fb1fd6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888234855475833,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d83abb50-84f5-4145-8a0d-153f7205e73e,},Annotations:map[string]string{io.kubernet
es.container.hash: 79ddc210,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24f56fb1a9313eae77502e794f15d62c153f745e5f57945520883828557ad5,PodSandboxId:193902853b2db760aa99856ca8769906c25a29603a8ce272187092e97425d0fd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701888225009876941,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-4mzzg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 4a654f18-b61c-4682-9ab4-a722d11bc12e,},Annotations:map[string]string{io.kubernetes.container.hash: 49611788,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ef2883756f102fb4a3ca7760115fc548fc36ef2f32691fb4c944ed3ccdfd6e,PodSandboxId:399034fbdcb1544d96094883f9f12ac61b3920837921a511a9f9a516e96b1cdf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701888197374036081,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-zhdjp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a257bc6d-f743-410b-a5ec-7d040ef09d78,},Annotations:map[string]string{io.kubernetes.container.hash: d1b9f936,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efbd98eb6d3905fa3c0e6f56c59ab537730a326a9362ef5fc369b93301cfecf,PodSandboxId:a689d3c5b16709ca2d73f37ec1d782df3feeddda7c3f79593ac2b3b08c1856ed,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888181683441942,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kmxf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26445108-ff59-451d-b784-b003244af934,},Annotations:map[string]string{io.kubernetes.container.hash: 23771731,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e720b7a24196808a234b831e088ffac76e06ec3f4d81c856f7eff841dfc93f9e,PodSandboxId:1f0e0901c63043401a7ec7f9d5f63f528c526d6b01fd0747d41144a045d63928,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888177159436280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t4xzd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65a1c953-bb8b-4036-a3c2-04a722d1e615,},Annotations:map[string]string{io.kubernetes.container.hash: b82fae73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97de121996c07f676f69835c743ff85a683dd45b3857ba4c665227ac4ad27829,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888158207708121,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464ea798a4e498a65b5658aecddfa649ab678bc7511cf6713b7c27df90551388,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888126887549519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578d24f42f5744f5b95637b5d684d327e3996315dbd00bdf2550265f90d6265d,PodSandboxId:51beec3e1c5053e89adc7a61ab118c8bdedec3e17d9638815098000f6d4596e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed
1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701888126136620104,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv776,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4aa92b0-8a65-46a1-bf5d-048065163dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3bafbd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed136a8f0ea64e0ec5be28bdd5f9d8216db8067974831250ba054d9ce5ac82,PodSandboxId:bd1cc776cf631ce65cd4aea7333d7184ef7fc41201db0cfe1535b7c3f84344c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ad
e8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701888117471988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zr82n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c2879a-3680-449d-b5ee-693c29e7e488,},Annotations:map[string]string{io.kubernetes.container.hash: 7f573fe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e24192cc4fe48447c905abdd80f32b7ed0d1d3df845833c200a17744f8737a,PodSandboxId:6bcf0ab4df389a7873a88f87745b85a01a739467b4f1f5bc35d853e3a015dc5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Im
age:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701888090810867566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0ddb51770711db0381c0565a2f42a65,},Annotations:map[string]string{io.kubernetes.container.hash: b8424952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22abfe2488e46bf0c0186a270489e789ec244e5571e14f5f7266d21a6612d603,PodSandboxId:9fd3986e85b94a919bcfdcf10526e9f004c147ca8d89aff1fc91cd3741abec1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701888090658642647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adf4f828b5baa957e29181c2bc1838c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d919ff61edd6a0d46d062dbfb049d867aa05d25355dc6e45bb1208d3b0f743f,PodSandboxId:9d15c03eb84cce43f4446588655e14e47000a450d3c2fc28428c3c38390feb06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02
d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701888090272531934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef192055271b430717c6ef61b607e00a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821f2d319062ee0290a2f4baa1da9eb2f87097a11c4aa3de75f64a9b1e673394,PodSandboxId:c5f14aa8cd37054fe50dfac52396b22f8f5ec0ff452e43648a0303af8e43bb2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e3697
0370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701888090153412980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b8e333302ca9711ad74186e2fc6a52,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b9580359-58aa-480a-bd77-abc9154a45d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.365214810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7f0e5f62-32c1-4386-9d12-2b62167d94d3 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.365281773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7f0e5f62-32c1-4386-9d12-2b62167d94d3 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.366821595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dd65a9eb-2acd-484a-bed4-0bb4639e7c1e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.368273999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701888381368251675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=dd65a9eb-2acd-484a-bed4-0bb4639e7c1e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.369189914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b282b64e-3b32-45e8-a343-81fd72f5f9bf name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.369272448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b282b64e-3b32-45e8-a343-81fd72f5f9bf name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:46:21 addons-463584 crio[715]: time="2023-12-06 18:46:21.369636850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f9f93b1f1fe6d82864f59ae55153033633e0c5ddd532d59fac57899420b6ec1,PodSandboxId:0360410ef8325ca7c40e9b90ba82386cb8cf9db585aa641532f20bfb79779e12,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701888372838175474,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-dgtqd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30aea9b3-7f82-4c35-a865-5361bb32d2af,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5b6a00,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523ce9862e88c69c6b4b180d1a777432dc391acd64f1cf89b8cd0917028436ae,PodSandboxId:9b2824b9583b18198df0addd1caceec56248cf69d1446f826756084d82fb1fd6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888234855475833,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d83abb50-84f5-4145-8a0d-153f7205e73e,},Annotations:map[string]string{io.kubernet
es.container.hash: 79ddc210,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24f56fb1a9313eae77502e794f15d62c153f745e5f57945520883828557ad5,PodSandboxId:193902853b2db760aa99856ca8769906c25a29603a8ce272187092e97425d0fd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701888225009876941,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-4mzzg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 4a654f18-b61c-4682-9ab4-a722d11bc12e,},Annotations:map[string]string{io.kubernetes.container.hash: 49611788,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ef2883756f102fb4a3ca7760115fc548fc36ef2f32691fb4c944ed3ccdfd6e,PodSandboxId:399034fbdcb1544d96094883f9f12ac61b3920837921a511a9f9a516e96b1cdf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701888197374036081,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-zhdjp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a257bc6d-f743-410b-a5ec-7d040ef09d78,},Annotations:map[string]string{io.kubernetes.container.hash: d1b9f936,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7efbd98eb6d3905fa3c0e6f56c59ab537730a326a9362ef5fc369b93301cfecf,PodSandboxId:a689d3c5b16709ca2d73f37ec1d782df3feeddda7c3f79593ac2b3b08c1856ed,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888181683441942,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2kmxf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 26445108-ff59-451d-b784-b003244af934,},Annotations:map[string]string{io.kubernetes.container.hash: 23771731,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e720b7a24196808a234b831e088ffac76e06ec3f4d81c856f7eff841dfc93f9e,PodSandboxId:1f0e0901c63043401a7ec7f9d5f63f528c526d6b01fd0747d41144a045d63928,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701888177159436280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t4xzd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65a1c953-bb8b-4036-a3c2-04a722d1e615,},Annotations:map[string]string{io.kubernetes.container.hash: b82fae73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97de121996c07f676f69835c743ff85a683dd45b3857ba4c665227ac4ad27829,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888158207708121,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464ea798a4e498a65b5658aecddfa649ab678bc7511cf6713b7c27df90551388,PodSandboxId:94210341d036ebb3a108bde993137ef4509d345b2462fd0406a681018a9c0f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888126887549519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22df35a9-9f09-4a46-af08-ecc84d038638,},Annotations:map[string]string{io.kubernetes.container.hash: 2b1df823,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578d24f42f5744f5b95637b5d684d327e3996315dbd00bdf2550265f90d6265d,PodSandboxId:51beec3e1c5053e89adc7a61ab118c8bdedec3e17d9638815098000f6d4596e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed
1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701888126136620104,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tv776,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4aa92b0-8a65-46a1-bf5d-048065163dd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3bafbd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed136a8f0ea64e0ec5be28bdd5f9d8216db8067974831250ba054d9ce5ac82,PodSandboxId:bd1cc776cf631ce65cd4aea7333d7184ef7fc41201db0cfe1535b7c3f84344c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ad
e8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701888117471988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zr82n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c2879a-3680-449d-b5ee-693c29e7e488,},Annotations:map[string]string{io.kubernetes.container.hash: 7f573fe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01e24192cc4fe48447c905abdd80f32b7ed0d1d3df845833c200a17744f8737a,PodSandboxId:6bcf0ab4df389a7873a88f87745b85a01a739467b4f1f5bc35d853e3a015dc5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Im
age:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701888090810867566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0ddb51770711db0381c0565a2f42a65,},Annotations:map[string]string{io.kubernetes.container.hash: b8424952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22abfe2488e46bf0c0186a270489e789ec244e5571e14f5f7266d21a6612d603,PodSandboxId:9fd3986e85b94a919bcfdcf10526e9f004c147ca8d89aff1fc91cd3741abec1b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b8
81d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701888090658642647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3adf4f828b5baa957e29181c2bc1838c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d919ff61edd6a0d46d062dbfb049d867aa05d25355dc6e45bb1208d3b0f743f,PodSandboxId:9d15c03eb84cce43f4446588655e14e47000a450d3c2fc28428c3c38390feb06,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02
d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701888090272531934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef192055271b430717c6ef61b607e00a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821f2d319062ee0290a2f4baa1da9eb2f87097a11c4aa3de75f64a9b1e673394,PodSandboxId:c5f14aa8cd37054fe50dfac52396b22f8f5ec0ff452e43648a0303af8e43bb2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e3697
0370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701888090153412980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-463584,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53b8e333302ca9711ad74186e2fc6a52,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b282b64e-3b32-45e8-a343-81fd72f5f9bf name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f9f93b1f1fe6       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   0360410ef8325       hello-world-app-5d77478584-dgtqd
	523ce9862e88c       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   9b2824b9583b1       nginx
	cf24f56fb1a93       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   193902853b2db       headlamp-777fd4b855-4mzzg
	36ef2883756f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   399034fbdcb15       gcp-auth-d4c87556c-zhdjp
	7efbd98eb6d39       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   a689d3c5b1670       ingress-nginx-admission-patch-2kmxf
	e720b7a241968       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   1f0e0901c6304       ingress-nginx-admission-create-t4xzd
	97de121996c07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   94210341d036e       storage-provisioner
	464ea798a4e49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   94210341d036e       storage-provisioner
	578d24f42f574       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   51beec3e1c505       kube-proxy-tv776
	f3ed136a8f0ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   bd1cc776cf631       coredns-5dd5756b68-zr82n
	01e24192cc4fe       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   6bcf0ab4df389       etcd-addons-463584
	22abfe2488e46       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   9fd3986e85b94       kube-scheduler-addons-463584
	3d919ff61edd6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   9d15c03eb84cc       kube-controller-manager-addons-463584
	821f2d319062e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   c5f14aa8cd370       kube-apiserver-addons-463584
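The container listing above is what the node's CRI tooling reports. As a rough way to reproduce it by hand (a sketch only, assuming shell access to the node, for example via minikube ssh, and the CRI-O socket path that appears in the debug log above):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

The -a flag includes exited containers, which is why the two kube-webhook-certgen admission jobs (create and patch) still show up here with state Exited.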
	
	* 
	* ==> coredns [f3ed136a8f0ea64e0ec5be28bdd5f9d8216db8067974831250ba054d9ce5ac82] <==
	* [INFO] 10.244.0.8:50811 - 51733 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000086347s
	[INFO] 10.244.0.8:41336 - 61404 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134026s
	[INFO] 10.244.0.8:41336 - 20191 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000045704s
	[INFO] 10.244.0.8:38263 - 65388 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043411s
	[INFO] 10.244.0.8:38263 - 58478 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040706s
	[INFO] 10.244.0.8:34736 - 23627 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096348s
	[INFO] 10.244.0.8:34736 - 55093 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064616s
	[INFO] 10.244.0.8:44851 - 19772 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.005279343s
	[INFO] 10.244.0.8:44851 - 50489 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088859s
	[INFO] 10.244.0.8:51767 - 18659 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079208s
	[INFO] 10.244.0.8:51767 - 55008 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00023968s
	[INFO] 10.244.0.8:55612 - 51250 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066596s
	[INFO] 10.244.0.8:55612 - 55856 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00030509s
	[INFO] 10.244.0.8:56404 - 52180 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066812s
	[INFO] 10.244.0.8:56404 - 17111 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000030618s
	[INFO] 10.244.0.20:43515 - 44119 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000338191s
	[INFO] 10.244.0.20:37301 - 2646 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000257987s
	[INFO] 10.244.0.20:47002 - 28267 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001265272s
	[INFO] 10.244.0.20:39681 - 60146 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001557606s
	[INFO] 10.244.0.20:59753 - 41843 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158215s
	[INFO] 10.244.0.20:35989 - 26250 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071856s
	[INFO] 10.244.0.20:48710 - 65063 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001610255s
	[INFO] 10.244.0.20:57620 - 22875 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003988147s
	[INFO] 10.244.0.22:51581 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000294931s
	[INFO] 10.244.0.22:39771 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154726s
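The NXDOMAIN/NOERROR pairs above are ordinary cluster-DNS search-path expansion, not lookup failures. A pod's resolv.conf typically looks like the illustrative sketch below (default values assumed, not captured from this run); with ndots:5, a name such as registry.kube-system.svc.cluster.local (four dots) is first tried against each search domain, yielding the NXDOMAIN answers, before the literal name resolves with NOERROR.

    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5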
	
	* 
	* ==> describe nodes <==
	* Name:               addons-463584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-463584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=addons-463584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T18_41_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-463584
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 18:41:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-463584
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 18:46:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 18:44:42 +0000   Wed, 06 Dec 2023 18:41:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 18:44:42 +0000   Wed, 06 Dec 2023 18:41:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 18:44:42 +0000   Wed, 06 Dec 2023 18:41:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 18:44:42 +0000   Wed, 06 Dec 2023 18:41:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    addons-463584
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 78717d270c0547d595aec8c3f5b90a7e
	  System UUID:                78717d27-0c05-47d5-95ae-c8c3f5b90a7e
	  Boot ID:                    996dc5bb-5ed2-4917-824a-1bf53bb9d7de
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-dgtqd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-d4c87556c-zhdjp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  headlamp                    headlamp-777fd4b855-4mzzg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-5dd5756b68-zr82n                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-463584                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m43s
	  kube-system                 kube-apiserver-addons-463584             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-463584    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-tv776                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-463584             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)    0 (0%)
	  memory             170Mi (4%)    170Mi (4%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node addons-463584 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node addons-463584 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node addons-463584 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s                  kubelet          Node addons-463584 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s                  kubelet          Node addons-463584 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s                  kubelet          Node addons-463584 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m43s                  kubelet          Node addons-463584 status is now: NodeReady
	  Normal  RegisteredNode           4m32s                  node-controller  Node addons-463584 event: Registered Node addons-463584 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.450115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec 6 18:41] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153688] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.126323] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.953573] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.100233] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.139580] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.106438] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.208953] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.283822] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +8.749135] systemd-fstab-generator[1245]: Ignoring "noauto" for root device
	[ +20.319796] kauditd_printk_skb: 20 callbacks suppressed
	[Dec 6 18:42] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.005531] kauditd_printk_skb: 14 callbacks suppressed
	[ +23.566429] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 18:43] kauditd_printk_skb: 37 callbacks suppressed
	[ +18.937744] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.489589] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.317487] kauditd_printk_skb: 3 callbacks suppressed
	[Dec 6 18:44] kauditd_printk_skb: 7 callbacks suppressed
	[ +25.562436] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 6 18:46] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [01e24192cc4fe48447c905abdd80f32b7ed0d1d3df845833c200a17744f8737a] <==
	* {"level":"info","ts":"2023-12-06T18:43:00.763316Z","caller":"traceutil/trace.go:171","msg":"trace[2025486728] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1022; }","duration":"105.833139ms","start":"2023-12-06T18:43:00.65747Z","end":"2023-12-06T18:43:00.763303Z","steps":["trace[2025486728] 'agreement among raft nodes before linearized reading'  (duration: 101.690005ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:43:00.758688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.766618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13774"}
	{"level":"info","ts":"2023-12-06T18:43:00.766693Z","caller":"traceutil/trace.go:171","msg":"trace[428231585] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1022; }","duration":"220.888549ms","start":"2023-12-06T18:43:00.544903Z","end":"2023-12-06T18:43:00.765792Z","steps":["trace[428231585] 'agreement among raft nodes before linearized reading'  (duration: 213.724784ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:43:09.395623Z","caller":"traceutil/trace.go:171","msg":"trace[1143865658] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"248.596505ms","start":"2023-12-06T18:43:09.147001Z","end":"2023-12-06T18:43:09.395598Z","steps":["trace[1143865658] 'process raft request'  (duration: 248.448238ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:43:09.401291Z","caller":"traceutil/trace.go:171","msg":"trace[2145141614] linearizableReadLoop","detail":"{readStateIndex:1119; appliedIndex:1118; }","duration":"241.606306ms","start":"2023-12-06T18:43:09.15967Z","end":"2023-12-06T18:43:09.401277Z","steps":["trace[2145141614] 'read index received'  (duration: 236.338267ms)","trace[2145141614] 'applied index is now lower than readState.Index'  (duration: 5.26708ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-06T18:43:09.401473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.782896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10948"}
	{"level":"info","ts":"2023-12-06T18:43:09.401524Z","caller":"traceutil/trace.go:171","msg":"trace[700774221] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1086; }","duration":"241.864667ms","start":"2023-12-06T18:43:09.159651Z","end":"2023-12-06T18:43:09.401516Z","steps":["trace[700774221] 'agreement among raft nodes before linearized reading'  (duration: 241.72421ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:43:09.401837Z","caller":"traceutil/trace.go:171","msg":"trace[1292135890] transaction","detail":"{read_only:false; response_revision:1086; number_of_response:1; }","duration":"244.930469ms","start":"2023-12-06T18:43:09.156896Z","end":"2023-12-06T18:43:09.401827Z","steps":["trace[1292135890] 'process raft request'  (duration: 244.248892ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:43:20.164132Z","caller":"traceutil/trace.go:171","msg":"trace[2074766352] linearizableReadLoop","detail":"{readStateIndex:1181; appliedIndex:1180; }","duration":"313.650022ms","start":"2023-12-06T18:43:19.850462Z","end":"2023-12-06T18:43:20.164112Z","steps":["trace[2074766352] 'read index received'  (duration: 313.439774ms)","trace[2074766352] 'applied index is now lower than readState.Index'  (duration: 209.369µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-06T18:43:20.164528Z","caller":"traceutil/trace.go:171","msg":"trace[771283254] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"326.324306ms","start":"2023-12-06T18:43:19.838191Z","end":"2023-12-06T18:43:20.164515Z","steps":["trace[771283254] 'process raft request'  (duration: 325.763523ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:43:20.165803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-06T18:43:19.838136Z","time spent":"327.589981ms","remote":"127.0.0.1:60460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-463584\" mod_revision:1089 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-463584\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-463584\" > >"}
	{"level":"warn","ts":"2023-12-06T18:43:20.164652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.220913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-06T18:43:20.165984Z","caller":"traceutil/trace.go:171","msg":"trace[398286841] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1146; }","duration":"315.606503ms","start":"2023-12-06T18:43:19.850363Z","end":"2023-12-06T18:43:20.16597Z","steps":["trace[398286841] 'agreement among raft nodes before linearized reading'  (duration: 314.194926ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:43:20.166033Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-06T18:43:19.850351Z","time spent":"315.672603ms","remote":"127.0.0.1:60438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2023-12-06T18:43:20.165361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.082097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-06T18:43:20.170466Z","caller":"traceutil/trace.go:171","msg":"trace[898106610] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1146; }","duration":"263.187649ms","start":"2023-12-06T18:43:19.907264Z","end":"2023-12-06T18:43:20.170452Z","steps":["trace[898106610] 'agreement among raft nodes before linearized reading'  (duration: 258.05279ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:43:20.166451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.163233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-06T18:43:20.170798Z","caller":"traceutil/trace.go:171","msg":"trace[124973370] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1146; }","duration":"127.50319ms","start":"2023-12-06T18:43:20.043275Z","end":"2023-12-06T18:43:20.170778Z","steps":["trace[124973370] 'agreement among raft nodes before linearized reading'  (duration: 123.145646ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:43:26.433936Z","caller":"traceutil/trace.go:171","msg":"trace[1226314406] linearizableReadLoop","detail":"{readStateIndex:1205; appliedIndex:1204; }","duration":"227.656515ms","start":"2023-12-06T18:43:26.206264Z","end":"2023-12-06T18:43:26.43392Z","steps":["trace[1226314406] 'read index received'  (duration: 227.446902ms)","trace[1226314406] 'applied index is now lower than readState.Index'  (duration: 157.356µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-06T18:43:26.434156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.889177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-463584\" ","response":"range_response_count:1 size:9049"}
	{"level":"info","ts":"2023-12-06T18:43:26.434212Z","caller":"traceutil/trace.go:171","msg":"trace[2114016942] range","detail":"{range_begin:/registry/minions/addons-463584; range_end:; response_count:1; response_revision:1168; }","duration":"227.934744ms","start":"2023-12-06T18:43:26.206244Z","end":"2023-12-06T18:43:26.434179Z","steps":["trace[2114016942] 'agreement among raft nodes before linearized reading'  (duration: 227.784114ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:43:26.434394Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.37441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-06T18:43:26.434443Z","caller":"traceutil/trace.go:171","msg":"trace[1062901739] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1168; }","duration":"146.426603ms","start":"2023-12-06T18:43:26.28801Z","end":"2023-12-06T18:43:26.434437Z","steps":["trace[1062901739] 'agreement among raft nodes before linearized reading'  (duration: 146.353442ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:43:44.845947Z","caller":"traceutil/trace.go:171","msg":"trace[1325024519] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"172.062032ms","start":"2023-12-06T18:43:44.673862Z","end":"2023-12-06T18:43:44.845924Z","steps":["trace[1325024519] 'process raft request'  (duration: 171.885401ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:44:41.384199Z","caller":"traceutil/trace.go:171","msg":"trace[1276453046] transaction","detail":"{read_only:false; response_revision:1750; number_of_response:1; }","duration":"166.6613ms","start":"2023-12-06T18:44:41.217514Z","end":"2023-12-06T18:44:41.384175Z","steps":["trace[1276453046] 'process raft request'  (duration: 166.483706ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [36ef2883756f102fb4a3ca7760115fc548fc36ef2f32691fb4c944ed3ccdfd6e] <==
	* 2023/12/06 18:43:17 GCP Auth Webhook started!
	2023/12/06 18:43:29 Ready to marshal response ...
	2023/12/06 18:43:29 Ready to write response ...
	2023/12/06 18:43:29 Ready to marshal response ...
	2023/12/06 18:43:29 Ready to write response ...
	2023/12/06 18:43:32 Ready to marshal response ...
	2023/12/06 18:43:32 Ready to write response ...
	2023/12/06 18:43:35 Ready to marshal response ...
	2023/12/06 18:43:35 Ready to write response ...
	2023/12/06 18:43:35 Ready to marshal response ...
	2023/12/06 18:43:35 Ready to write response ...
	2023/12/06 18:43:35 Ready to marshal response ...
	2023/12/06 18:43:35 Ready to write response ...
	2023/12/06 18:43:42 Ready to marshal response ...
	2023/12/06 18:43:42 Ready to write response ...
	2023/12/06 18:43:49 Ready to marshal response ...
	2023/12/06 18:43:49 Ready to write response ...
	2023/12/06 18:43:56 Ready to marshal response ...
	2023/12/06 18:43:56 Ready to write response ...
	2023/12/06 18:44:04 Ready to marshal response ...
	2023/12/06 18:44:04 Ready to write response ...
	2023/12/06 18:44:22 Ready to marshal response ...
	2023/12/06 18:44:22 Ready to write response ...
	2023/12/06 18:46:10 Ready to marshal response ...
	2023/12/06 18:46:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:46:21 up 5 min,  0 users,  load average: 0.70, 1.80, 0.95
	Linux addons-463584 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [821f2d319062ee0290a2f4baa1da9eb2f87097a11c4aa3de75f64a9b1e673394] <==
	* W1206 18:43:44.690191       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1206 18:43:49.742193       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 18:43:50.005835       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.187.183"}
	E1206 18:43:58.486993       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1206 18:44:17.748202       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 18:44:38.660145       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.660213       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:44:38.675165       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.675238       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:44:38.696026       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.696188       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:44:38.751185       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.751513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:44:38.759690       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.759787       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:44:38.777302       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.777404       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:44:38.800656       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.800758       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:44:38.830024       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:44:38.831277       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1206 18:44:39.777389       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 18:44:39.830285       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1206 18:44:39.838456       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1206 18:46:10.871780       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.60.178"}
	
	* 
	* ==> kube-controller-manager [3d919ff61edd6a0d46d062dbfb049d867aa05d25355dc6e45bb1208d3b0f743f] <==
	* W1206 18:45:19.463381       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:45:19.463525       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:45:25.074669       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:45:25.074779       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:45:36.239342       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:45:36.239369       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:45:53.159908       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:45:53.159990       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:46:00.037769       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:46:00.037824       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:46:05.747997       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:46:05.748206       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1206 18:46:10.613149       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1206 18:46:10.664487       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-dgtqd"
	I1206 18:46:10.676054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.888361ms"
	I1206 18:46:10.697290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="21.040314ms"
	I1206 18:46:10.697557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="138.951µs"
	I1206 18:46:10.710727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="64.195µs"
	W1206 18:46:11.075542       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:46:11.075751       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1206 18:46:13.192859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.932409ms"
	I1206 18:46:13.193278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="216.887µs"
	I1206 18:46:13.362942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.231µs"
	I1206 18:46:13.366740       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1206 18:46:13.381759       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	* 
	* ==> kube-proxy [578d24f42f5744f5b95637b5d684d327e3996315dbd00bdf2550265f90d6265d] <==
	* I1206 18:42:07.494214       1 server_others.go:69] "Using iptables proxy"
	I1206 18:42:07.509373       1 node.go:141] Successfully retrieved node IP: 192.168.39.94
	I1206 18:42:07.963053       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1206 18:42:07.963166       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 18:42:08.061827       1 server_others.go:152] "Using iptables Proxier"
	I1206 18:42:08.061898       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 18:42:08.085285       1 server.go:846] "Version info" version="v1.28.4"
	I1206 18:42:08.085340       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 18:42:08.113550       1 config.go:188] "Starting service config controller"
	I1206 18:42:08.114325       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 18:42:08.114366       1 config.go:97] "Starting endpoint slice config controller"
	I1206 18:42:08.114370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 18:42:08.180694       1 config.go:315] "Starting node config controller"
	I1206 18:42:08.180732       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 18:42:08.229361       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 18:42:08.229628       1 shared_informer.go:318] Caches are synced for service config
	I1206 18:42:08.288916       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [22abfe2488e46bf0c0186a270489e789ec244e5571e14f5f7266d21a6612d603] <==
	* W1206 18:41:34.509747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 18:41:34.509755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1206 18:41:34.509786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 18:41:34.509794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1206 18:41:35.371194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 18:41:35.371285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 18:41:35.396411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 18:41:35.396467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1206 18:41:35.524201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 18:41:35.524281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1206 18:41:35.576974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 18:41:35.576998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1206 18:41:35.650705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 18:41:35.650756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1206 18:41:35.654755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 18:41:35.654803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 18:41:35.658882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 18:41:35.658914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 18:41:35.720983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 18:41:35.721042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 18:41:35.751599       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1206 18:41:35.751673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 18:41:35.932992       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 18:41:35.933678       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1206 18:41:38.883606       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 18:41:02 UTC, ends at Wed 2023-12-06 18:46:22 UTC. --
	Dec 06 18:46:10 addons-463584 kubelet[1252]: I1206 18:46:10.684302    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="adaad5af-f54f-4af8-b891-02e38cdb1b38" containerName="csi-snapshotter"
	Dec 06 18:46:10 addons-463584 kubelet[1252]: I1206 18:46:10.840995    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpgf9\" (UniqueName: \"kubernetes.io/projected/30aea9b3-7f82-4c35-a865-5361bb32d2af-kube-api-access-cpgf9\") pod \"hello-world-app-5d77478584-dgtqd\" (UID: \"30aea9b3-7f82-4c35-a865-5361bb32d2af\") " pod="default/hello-world-app-5d77478584-dgtqd"
	Dec 06 18:46:10 addons-463584 kubelet[1252]: I1206 18:46:10.841148    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/30aea9b3-7f82-4c35-a865-5361bb32d2af-gcp-creds\") pod \"hello-world-app-5d77478584-dgtqd\" (UID: \"30aea9b3-7f82-4c35-a865-5361bb32d2af\") " pod="default/hello-world-app-5d77478584-dgtqd"
	Dec 06 18:46:12 addons-463584 kubelet[1252]: I1206 18:46:12.144839    1252 scope.go:117] "RemoveContainer" containerID="9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62"
	Dec 06 18:46:12 addons-463584 kubelet[1252]: I1206 18:46:12.150833    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt5g6\" (UniqueName: \"kubernetes.io/projected/4dc0de3f-4f81-4871-bad8-1895e8cc7190-kube-api-access-lt5g6\") pod \"4dc0de3f-4f81-4871-bad8-1895e8cc7190\" (UID: \"4dc0de3f-4f81-4871-bad8-1895e8cc7190\") "
	Dec 06 18:46:12 addons-463584 kubelet[1252]: I1206 18:46:12.163962    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc0de3f-4f81-4871-bad8-1895e8cc7190-kube-api-access-lt5g6" (OuterVolumeSpecName: "kube-api-access-lt5g6") pod "4dc0de3f-4f81-4871-bad8-1895e8cc7190" (UID: "4dc0de3f-4f81-4871-bad8-1895e8cc7190"). InnerVolumeSpecName "kube-api-access-lt5g6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 06 18:46:12 addons-463584 kubelet[1252]: I1206 18:46:12.174145    1252 scope.go:117] "RemoveContainer" containerID="9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62"
	Dec 06 18:46:12 addons-463584 kubelet[1252]: E1206 18:46:12.174702    1252 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62\": container with ID starting with 9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62 not found: ID does not exist" containerID="9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62"
	Dec 06 18:46:12 addons-463584 kubelet[1252]: I1206 18:46:12.174773    1252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62"} err="failed to get container status \"9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62\": rpc error: code = NotFound desc = could not find container \"9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62\": container with ID starting with 9a443edb5e19b529f6baede93b521f5b883afb12c9ebebaf60f17f34670d2f62 not found: ID does not exist"
	Dec 06 18:46:12 addons-463584 kubelet[1252]: I1206 18:46:12.252139    1252 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lt5g6\" (UniqueName: \"kubernetes.io/projected/4dc0de3f-4f81-4871-bad8-1895e8cc7190-kube-api-access-lt5g6\") on node \"addons-463584\" DevicePath \"\""
	Dec 06 18:46:13 addons-463584 kubelet[1252]: I1206 18:46:13.175195    1252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-dgtqd" podStartSLOduration=2.261873383 podCreationTimestamp="2023-12-06 18:46:10 +0000 UTC" firstStartedPulling="2023-12-06 18:46:11.895666567 +0000 UTC m=+274.269194339" lastFinishedPulling="2023-12-06 18:46:12.808842626 +0000 UTC m=+275.182370388" observedRunningTime="2023-12-06 18:46:13.173886013 +0000 UTC m=+275.547413793" watchObservedRunningTime="2023-12-06 18:46:13.175049432 +0000 UTC m=+275.548577212"
	Dec 06 18:46:13 addons-463584 kubelet[1252]: I1206 18:46:13.864256    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="26445108-ff59-451d-b784-b003244af934" path="/var/lib/kubelet/pods/26445108-ff59-451d-b784-b003244af934/volumes"
	Dec 06 18:46:13 addons-463584 kubelet[1252]: I1206 18:46:13.864786    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4dc0de3f-4f81-4871-bad8-1895e8cc7190" path="/var/lib/kubelet/pods/4dc0de3f-4f81-4871-bad8-1895e8cc7190/volumes"
	Dec 06 18:46:13 addons-463584 kubelet[1252]: I1206 18:46:13.865298    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="65a1c953-bb8b-4036-a3c2-04a722d1e615" path="/var/lib/kubelet/pods/65a1c953-bb8b-4036-a3c2-04a722d1e615/volumes"
	Dec 06 18:46:16 addons-463584 kubelet[1252]: I1206 18:46:16.681568    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9063f5cc-3a55-4c59-bf10-fcb58fa38534-webhook-cert\") pod \"9063f5cc-3a55-4c59-bf10-fcb58fa38534\" (UID: \"9063f5cc-3a55-4c59-bf10-fcb58fa38534\") "
	Dec 06 18:46:16 addons-463584 kubelet[1252]: I1206 18:46:16.681618    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wmfp\" (UniqueName: \"kubernetes.io/projected/9063f5cc-3a55-4c59-bf10-fcb58fa38534-kube-api-access-7wmfp\") pod \"9063f5cc-3a55-4c59-bf10-fcb58fa38534\" (UID: \"9063f5cc-3a55-4c59-bf10-fcb58fa38534\") "
	Dec 06 18:46:16 addons-463584 kubelet[1252]: I1206 18:46:16.687187    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9063f5cc-3a55-4c59-bf10-fcb58fa38534-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9063f5cc-3a55-4c59-bf10-fcb58fa38534" (UID: "9063f5cc-3a55-4c59-bf10-fcb58fa38534"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:46:16 addons-463584 kubelet[1252]: I1206 18:46:16.688204    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9063f5cc-3a55-4c59-bf10-fcb58fa38534-kube-api-access-7wmfp" (OuterVolumeSpecName: "kube-api-access-7wmfp") pod "9063f5cc-3a55-4c59-bf10-fcb58fa38534" (UID: "9063f5cc-3a55-4c59-bf10-fcb58fa38534"). InnerVolumeSpecName "kube-api-access-7wmfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 06 18:46:16 addons-463584 kubelet[1252]: I1206 18:46:16.781869    1252 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9063f5cc-3a55-4c59-bf10-fcb58fa38534-webhook-cert\") on node \"addons-463584\" DevicePath \"\""
	Dec 06 18:46:16 addons-463584 kubelet[1252]: I1206 18:46:16.781905    1252 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7wmfp\" (UniqueName: \"kubernetes.io/projected/9063f5cc-3a55-4c59-bf10-fcb58fa38534-kube-api-access-7wmfp\") on node \"addons-463584\" DevicePath \"\""
	Dec 06 18:46:17 addons-463584 kubelet[1252]: I1206 18:46:17.182504    1252 scope.go:117] "RemoveContainer" containerID="9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104"
	Dec 06 18:46:17 addons-463584 kubelet[1252]: I1206 18:46:17.219007    1252 scope.go:117] "RemoveContainer" containerID="9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104"
	Dec 06 18:46:17 addons-463584 kubelet[1252]: E1206 18:46:17.219616    1252 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104\": container with ID starting with 9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104 not found: ID does not exist" containerID="9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104"
	Dec 06 18:46:17 addons-463584 kubelet[1252]: I1206 18:46:17.219659    1252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104"} err="failed to get container status \"9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104\": rpc error: code = NotFound desc = could not find container \"9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104\": container with ID starting with 9599688b639f2c1ea55bb909e0d511170aa7b6267755a67c8daf2a4a66cb1104 not found: ID does not exist"
	Dec 06 18:46:17 addons-463584 kubelet[1252]: I1206 18:46:17.864940    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9063f5cc-3a55-4c59-bf10-fcb58fa38534" path="/var/lib/kubelet/pods/9063f5cc-3a55-4c59-bf10-fcb58fa38534/volumes"
	
	* 
	* ==> storage-provisioner [464ea798a4e498a65b5658aecddfa649ab678bc7511cf6713b7c27df90551388] <==
	* I1206 18:42:07.845219       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 18:42:37.955204       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [97de121996c07f676f69835c743ff85a683dd45b3857ba4c665227ac4ad27829] <==
	* I1206 18:42:38.417910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 18:42:38.453630       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 18:42:38.453804       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 18:42:38.465414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 18:42:38.468265       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2376a0f1-ff09-4303-9d25-a03099b524a6", APIVersion:"v1", ResourceVersion:"942", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-463584_6eb29937-83ec-4577-a232-3148594daeb4 became leader
	I1206 18:42:38.468701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-463584_6eb29937-83ec-4577-a232-3148594daeb4!
	I1206 18:42:38.569536       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-463584_6eb29937-83ec-4577-a232-3148594daeb4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-463584 -n addons-463584
helpers_test.go:261: (dbg) Run:  kubectl --context addons-463584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.73s)
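The post-mortem above captures controller-manager and kubelet activity around ingress-nginx teardown at 18:46 (the ingress-nginx-controller-7c6974c4d8 ReplicaSet sync and the webhook-cert volume unmount). When reproducing this failure, a few read-only checks against the same context show whether the controller pod and the ingress object were still serving when the test timed out. This is a debugging sketch, not part of the test; only the ingress-nginx-controller deployment name is inferred from the ReplicaSet name in the controller-manager log above:

	kubectl --context addons-463584 -n ingress-nginx get pods -o wide
	kubectl --context addons-463584 get ingress --all-namespaces
	kubectl --context addons-463584 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100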

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.02s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-463584
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-463584: exit status 82 (2m1.00398994s)

                                                
                                                
-- stdout --
	* Stopping node "addons-463584"  ...
	* Stopping node "addons-463584"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-463584" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-463584
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-463584: exit status 11 (21.724876466s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-463584" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-463584
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-463584: exit status 11 (6.142637768s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-463584" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-463584
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-463584: exit status 11 (6.14456843s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-463584" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.02s)
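All three addons commands fail for the same underlying reason as the stop: the node at 192.168.39.94 is unreachable over SSH ("no route to host"), so minikube cannot check whether the runtime is paused. A hedged debugging sketch, using only flags the report itself references, is to confirm the profile state before retrying enable/disable and to retry the stop once the state is known:

	out/minikube-linux-amd64 status -p addons-463584
	out/minikube-linux-amd64 logs -p addons-463584 --file=logs.txt
	out/minikube-linux-amd64 stop -p addons-463584

If status still reports the host as Running after a failed stop, the GUEST_STOP_TIMEOUT advice above (attaching logs.txt and the /tmp/minikube_stop_*.log file to an issue) applies.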

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-317483
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image load --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 image load --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr: (4.463415791s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-317483" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.60s)
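
A hedged reproduction sketch (not part of the captured test output): the commands below mirror the steps logged above for profile functional-317483; the final grep is an assumed manual equivalent of the test's assertion (functional_test.go:442) that the tag shows up in the in-cluster image list.

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-317483
	out/minikube-linux-amd64 -p functional-317483 image load --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr
	# Assumed manual check of the failing assertion: the tag should appear in the in-cluster image list.
	out/minikube-linux-amd64 -p functional-317483 image ls | grep addon-resizer:functional-317483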

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image save gcr.io/google-containers/addon-resizer:functional-317483 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 image save gcr.io/google-containers/addon-resizer:functional-317483 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.379370137s)
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)
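
A hedged reproduction sketch (not part of the captured test output): the image save command is copied verbatim from the run above; the trailing ls is an assumed manual check matching the test's expectation at functional_test.go:385 that the tar exists after `image save`.

	out/minikube-linux-amd64 -p functional-317483 image save gcr.io/google-containers/addon-resizer:functional-317483 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
	# Assumed manual check -- expected by the test but absent in this run:
	ls -l /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar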

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I1206 18:53:12.097591   78209 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:53:12.097783   78209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:12.097793   78209 out.go:309] Setting ErrFile to fd 2...
	I1206 18:53:12.097798   78209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:12.097987   78209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 18:53:12.098551   78209 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:53:12.098656   78209 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:53:12.099014   78209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:53:12.099057   78209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:53:12.113679   78209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36519
	I1206 18:53:12.114158   78209 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:53:12.114798   78209 main.go:141] libmachine: Using API Version  1
	I1206 18:53:12.114833   78209 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:53:12.115238   78209 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:53:12.115439   78209 main.go:141] libmachine: (functional-317483) Calling .GetState
	I1206 18:53:12.117755   78209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:53:12.117814   78209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:53:12.132680   78209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I1206 18:53:12.133154   78209 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:53:12.133725   78209 main.go:141] libmachine: Using API Version  1
	I1206 18:53:12.133758   78209 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:53:12.134145   78209 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:53:12.134394   78209 main.go:141] libmachine: (functional-317483) Calling .DriverName
	I1206 18:53:12.134721   78209 ssh_runner.go:195] Run: systemctl --version
	I1206 18:53:12.134744   78209 main.go:141] libmachine: (functional-317483) Calling .GetSSHHostname
	I1206 18:53:12.138150   78209 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
	I1206 18:53:12.138560   78209 main.go:141] libmachine: (functional-317483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:af:b4", ip: ""} in network mk-functional-317483: {Iface:virbr1 ExpiryTime:2023-12-06 19:50:27 +0000 UTC Type:0 Mac:52:54:00:f5:af:b4 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-317483 Clientid:01:52:54:00:f5:af:b4}
	I1206 18:53:12.138598   78209 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined IP address 192.168.39.65 and MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
	I1206 18:53:12.138785   78209 main.go:141] libmachine: (functional-317483) Calling .GetSSHPort
	I1206 18:53:12.138972   78209 main.go:141] libmachine: (functional-317483) Calling .GetSSHKeyPath
	I1206 18:53:12.139118   78209 main.go:141] libmachine: (functional-317483) Calling .GetSSHUsername
	I1206 18:53:12.139266   78209 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/functional-317483/id_rsa Username:docker}
	I1206 18:53:12.307608   78209 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W1206 18:53:12.307706   78209 cache_images.go:254] Failed to load cached images for profile functional-317483. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I1206 18:53:12.307740   78209 cache_images.go:262] succeeded pushing to: 
	I1206 18:53:12.307750   78209 cache_images.go:263] failed pushing to: functional-317483
	I1206 18:53:12.307777   78209 main.go:141] libmachine: Making call to close driver server
	I1206 18:53:12.307794   78209 main.go:141] libmachine: (functional-317483) Calling .Close
	I1206 18:53:12.308061   78209 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:53:12.308078   78209 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:53:12.308090   78209 main.go:141] libmachine: Making call to close driver server
	I1206 18:53:12.308104   78209 main.go:141] libmachine: (functional-317483) Calling .Close
	I1206 18:53:12.308361   78209 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:53:12.308377   78209 main.go:141] libmachine: (functional-317483) DBG | Closing plugin on server side
	I1206 18:53:12.308379   78209 main.go:141] libmachine: Making call to close connection to plugin binary

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)
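
A hedged note with a reproduction sketch (not part of the captured test output): the stderr above fails at `stat .../addon-resizer-save.tar: no such file or directory`, consistent with the tar never having been written because the preceding ImageSaveToFile test failed. The load step being exercised, using the same profile and path:

	# Only meaningful once the save step has actually produced the tar:
	out/minikube-linux-amd64 -p functional-317483 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr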

TestIngressAddonLegacy/serial/ValidateIngressAddons (164.89s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-283223 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-283223 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.852843662s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-283223 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-283223 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9e94db7c-b3d5-43a7-87d6-7d33def921e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9e94db7c-b3d5-43a7-87d6-7d33def921e1] Running
E1206 18:56:06.501349   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.016713574s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-283223 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1206 18:57:54.633018   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:54.638310   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:54.648598   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:54.668886   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:54.709250   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:54.789638   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:54.950122   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:55.270845   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:55.911796   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:57.192548   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:57:59.754357   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:58:04.874681   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 18:58:15.115595   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-283223 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.546689423s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
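
A hedged reproduction sketch (not part of the captured test output): the ssh curl below is the command the test ran; curl's exit code 28 (seen above as "Process exited with status 28") corresponds to an operation timeout, and --max-time is an assumed addition to surface that timeout faster when reproducing by hand.

	# --max-time 30 is an assumed addition; the rest is the exact command from the run.
	out/minikube-linux-amd64 -p ingress-addon-legacy-283223 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"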
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-283223 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-283223 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.55
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-283223 addons disable ingress-dns --alsologtostderr -v=1
E1206 18:58:22.657447   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-283223 addons disable ingress-dns --alsologtostderr -v=1: (4.830601248s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-283223 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-283223 addons disable ingress --alsologtostderr -v=1: (7.591908625s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-283223 -n ingress-addon-legacy-283223
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-283223 logs -n 25
E1206 18:58:35.596214   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-283223 logs -n 25: (1.270933754s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-317483                                                   | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-317483                                                   | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-317483 ssh findmnt                                          | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-317483 ssh findmnt                                          | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-317483 ssh findmnt                                          | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-317483 ssh findmnt                                          | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-317483                                                   | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-317483                                                      | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-317483                                                      | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-317483                                                      | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-317483                                                      | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-317483                                                      | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-317483 ssh pgrep                                            | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-317483 image build -t                                       | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | localhost/my-image:functional-317483                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-317483                                                      | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-317483                                                      | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-317483 image ls                                             | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	| delete         | -p functional-317483                                                   | functional-317483           | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:53 UTC |
	| start          | -p ingress-addon-legacy-283223                                         | ingress-addon-legacy-283223 | jenkins | v1.32.0 | 06 Dec 23 18:53 UTC | 06 Dec 23 18:55 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-283223                                            | ingress-addon-legacy-283223 | jenkins | v1.32.0 | 06 Dec 23 18:55 UTC | 06 Dec 23 18:55 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-283223                                            | ingress-addon-legacy-283223 | jenkins | v1.32.0 | 06 Dec 23 18:55 UTC | 06 Dec 23 18:55 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-283223                                            | ingress-addon-legacy-283223 | jenkins | v1.32.0 | 06 Dec 23 18:56 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-283223 ip                                         | ingress-addon-legacy-283223 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC | 06 Dec 23 18:58 UTC |
	| addons         | ingress-addon-legacy-283223                                            | ingress-addon-legacy-283223 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC | 06 Dec 23 18:58 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-283223                                            | ingress-addon-legacy-283223 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC | 06 Dec 23 18:58 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:53:46
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:53:46.062029   79322 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:53:46.062171   79322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:46.062178   79322 out.go:309] Setting ErrFile to fd 2...
	I1206 18:53:46.062186   79322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:46.062394   79322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 18:53:46.063010   79322 out.go:303] Setting JSON to false
	I1206 18:53:46.063945   79322 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":5776,"bootTime":1701883050,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:53:46.064006   79322 start.go:138] virtualization: kvm guest
	I1206 18:53:46.066483   79322 out.go:177] * [ingress-addon-legacy-283223] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:53:46.068131   79322 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 18:53:46.068200   79322 notify.go:220] Checking for updates...
	I1206 18:53:46.069681   79322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:53:46.071332   79322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:53:46.072789   79322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:53:46.074334   79322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:53:46.075779   79322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:53:46.077355   79322 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:53:46.112902   79322 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 18:53:46.114422   79322 start.go:298] selected driver: kvm2
	I1206 18:53:46.114435   79322 start.go:902] validating driver "kvm2" against <nil>
	I1206 18:53:46.114445   79322 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:53:46.115125   79322 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:53:46.115209   79322 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 18:53:46.129546   79322 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 18:53:46.129633   79322 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:53:46.129864   79322 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 18:53:46.129918   79322 cni.go:84] Creating CNI manager for ""
	I1206 18:53:46.129930   79322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:53:46.129939   79322 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 18:53:46.129950   79322 start_flags.go:323] config:
	{Name:ingress-addon-legacy-283223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-283223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:53:46.130069   79322 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:53:46.132826   79322 out.go:177] * Starting control plane node ingress-addon-legacy-283223 in cluster ingress-addon-legacy-283223
	I1206 18:53:46.134286   79322 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1206 18:53:46.166192   79322 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1206 18:53:46.166245   79322 cache.go:56] Caching tarball of preloaded images
	I1206 18:53:46.166396   79322 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1206 18:53:46.168289   79322 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1206 18:53:46.169783   79322 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:53:46.202534   79322 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1206 18:53:49.122860   79322 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:53:49.122955   79322 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:53:50.115380   79322 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1206 18:53:50.115737   79322 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/config.json ...
	I1206 18:53:50.115765   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/config.json: {Name:mk4317b86d5b2d6d8581cbf90f4df7bf1848bb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:53:50.115928   79322 start.go:365] acquiring machines lock for ingress-addon-legacy-283223: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 18:53:50.115962   79322 start.go:369] acquired machines lock for "ingress-addon-legacy-283223" in 18.01µs
	I1206 18:53:50.115980   79322 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-283223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-283223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:53:50.116050   79322 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 18:53:50.118253   79322 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1206 18:53:50.118428   79322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:53:50.118455   79322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:53:50.132512   79322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I1206 18:53:50.132981   79322 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:53:50.133598   79322 main.go:141] libmachine: Using API Version  1
	I1206 18:53:50.133625   79322 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:53:50.133973   79322 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:53:50.134157   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetMachineName
	I1206 18:53:50.134338   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:53:50.134547   79322 start.go:159] libmachine.API.Create for "ingress-addon-legacy-283223" (driver="kvm2")
	I1206 18:53:50.134603   79322 client.go:168] LocalClient.Create starting
	I1206 18:53:50.134652   79322 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem
	I1206 18:53:50.134702   79322 main.go:141] libmachine: Decoding PEM data...
	I1206 18:53:50.134728   79322 main.go:141] libmachine: Parsing certificate...
	I1206 18:53:50.134809   79322 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem
	I1206 18:53:50.134842   79322 main.go:141] libmachine: Decoding PEM data...
	I1206 18:53:50.134863   79322 main.go:141] libmachine: Parsing certificate...
	I1206 18:53:50.134893   79322 main.go:141] libmachine: Running pre-create checks...
	I1206 18:53:50.134911   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .PreCreateCheck
	I1206 18:53:50.135238   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetConfigRaw
	I1206 18:53:50.135715   79322 main.go:141] libmachine: Creating machine...
	I1206 18:53:50.135736   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .Create
	I1206 18:53:50.135882   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Creating KVM machine...
	I1206 18:53:50.137102   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found existing default KVM network
	I1206 18:53:50.137795   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:50.137652   79356 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I1206 18:53:50.142926   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | trying to create private KVM network mk-ingress-addon-legacy-283223 192.168.39.0/24...
	I1206 18:53:50.212835   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | private KVM network mk-ingress-addon-legacy-283223 192.168.39.0/24 created
	I1206 18:53:50.212962   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Setting up store path in /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223 ...
	I1206 18:53:50.213020   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Building disk image from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 18:53:50.213058   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Downloading /home/jenkins/minikube-integration/17740-63652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1206 18:53:50.213128   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:50.212790   79356 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:53:50.449400   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:50.449275   79356 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa...
	I1206 18:53:50.686052   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:50.685851   79356 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/ingress-addon-legacy-283223.rawdisk...
	I1206 18:53:50.686101   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Writing magic tar header
	I1206 18:53:50.686125   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Writing SSH key tar header
	I1206 18:53:50.686145   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:50.685991   79356 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223 ...
	I1206 18:53:50.686165   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223 (perms=drwx------)
	I1206 18:53:50.686189   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines (perms=drwxr-xr-x)
	I1206 18:53:50.686210   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223
	I1206 18:53:50.686228   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube (perms=drwxr-xr-x)
	I1206 18:53:50.686260   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines
	I1206 18:53:50.686297   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652 (perms=drwxrwxr-x)
	I1206 18:53:50.686345   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 18:53:50.686362   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:53:50.686383   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652
	I1206 18:53:50.686398   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1206 18:53:50.686416   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Checking permissions on dir: /home/jenkins
	I1206 18:53:50.686429   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Checking permissions on dir: /home
	I1206 18:53:50.686444   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 18:53:50.686458   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Skipping /home - not owner
	I1206 18:53:50.686471   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Creating domain...
	I1206 18:53:50.687382   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) define libvirt domain using xml: 
	I1206 18:53:50.687409   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) <domain type='kvm'>
	I1206 18:53:50.687421   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   <name>ingress-addon-legacy-283223</name>
	I1206 18:53:50.687430   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   <memory unit='MiB'>4096</memory>
	I1206 18:53:50.687452   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   <vcpu>2</vcpu>
	I1206 18:53:50.687467   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   <features>
	I1206 18:53:50.687479   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <acpi/>
	I1206 18:53:50.687492   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <apic/>
	I1206 18:53:50.687506   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <pae/>
	I1206 18:53:50.687523   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     
	I1206 18:53:50.687538   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   </features>
	I1206 18:53:50.687552   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   <cpu mode='host-passthrough'>
	I1206 18:53:50.687566   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   
	I1206 18:53:50.687579   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   </cpu>
	I1206 18:53:50.687593   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   <os>
	I1206 18:53:50.687617   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <type>hvm</type>
	I1206 18:53:50.687632   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <boot dev='cdrom'/>
	I1206 18:53:50.687646   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <boot dev='hd'/>
	I1206 18:53:50.687666   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <bootmenu enable='no'/>
	I1206 18:53:50.687679   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   </os>
	I1206 18:53:50.687708   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   <devices>
	I1206 18:53:50.687738   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <disk type='file' device='cdrom'>
	I1206 18:53:50.687782   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/boot2docker.iso'/>
	I1206 18:53:50.687814   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <target dev='hdc' bus='scsi'/>
	I1206 18:53:50.687831   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <readonly/>
	I1206 18:53:50.687844   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     </disk>
	I1206 18:53:50.687857   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <disk type='file' device='disk'>
	I1206 18:53:50.687871   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1206 18:53:50.687893   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/ingress-addon-legacy-283223.rawdisk'/>
	I1206 18:53:50.687909   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <target dev='hda' bus='virtio'/>
	I1206 18:53:50.687919   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     </disk>
	I1206 18:53:50.687932   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <interface type='network'>
	I1206 18:53:50.687951   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <source network='mk-ingress-addon-legacy-283223'/>
	I1206 18:53:50.687964   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <model type='virtio'/>
	I1206 18:53:50.687978   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     </interface>
	I1206 18:53:50.687991   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <interface type='network'>
	I1206 18:53:50.688005   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <source network='default'/>
	I1206 18:53:50.688014   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <model type='virtio'/>
	I1206 18:53:50.688022   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     </interface>
	I1206 18:53:50.688051   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <serial type='pty'>
	I1206 18:53:50.688066   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <target port='0'/>
	I1206 18:53:50.688079   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     </serial>
	I1206 18:53:50.688093   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <console type='pty'>
	I1206 18:53:50.688106   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <target type='serial' port='0'/>
	I1206 18:53:50.688128   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     </console>
	I1206 18:53:50.688144   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     <rng model='virtio'>
	I1206 18:53:50.688162   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)       <backend model='random'>/dev/random</backend>
	I1206 18:53:50.688173   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     </rng>
	I1206 18:53:50.688187   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     
	I1206 18:53:50.688199   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)     
	I1206 18:53:50.688214   79322 main.go:141] libmachine: (ingress-addon-legacy-283223)   </devices>
	I1206 18:53:50.688232   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) </domain>
	I1206 18:53:50.688248   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) 
	I1206 18:53:50.692522   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:c9:57:0e in network default
	I1206 18:53:50.693255   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Ensuring networks are active...
	I1206 18:53:50.693281   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:50.693932   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Ensuring network default is active
	I1206 18:53:50.694227   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Ensuring network mk-ingress-addon-legacy-283223 is active
	I1206 18:53:50.694816   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Getting domain xml...
	I1206 18:53:50.695509   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Creating domain...
	I1206 18:53:51.910591   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Waiting to get IP...
	I1206 18:53:51.911474   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:51.911859   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:51.911888   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:51.911831   79356 retry.go:31] will retry after 202.765357ms: waiting for machine to come up
	I1206 18:53:52.116308   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:52.116712   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:52.116743   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:52.116659   79356 retry.go:31] will retry after 329.598123ms: waiting for machine to come up
	I1206 18:53:52.448472   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:52.448924   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:52.448957   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:52.448861   79356 retry.go:31] will retry after 311.796192ms: waiting for machine to come up
	I1206 18:53:52.762310   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:52.762768   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:52.762799   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:52.762705   79356 retry.go:31] will retry after 466.575789ms: waiting for machine to come up
	I1206 18:53:53.231314   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:53.231749   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:53.231775   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:53.231713   79356 retry.go:31] will retry after 595.148208ms: waiting for machine to come up
	I1206 18:53:53.828445   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:53.828885   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:53.828926   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:53.828831   79356 retry.go:31] will retry after 672.639589ms: waiting for machine to come up
	I1206 18:53:54.502661   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:54.502981   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:54.503013   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:54.502923   79356 retry.go:31] will retry after 932.314626ms: waiting for machine to come up
	I1206 18:53:55.436588   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:55.436967   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:55.436997   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:55.436949   79356 retry.go:31] will retry after 1.089149549s: waiting for machine to come up
	I1206 18:53:56.528266   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:56.528741   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:56.528772   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:56.528658   79356 retry.go:31] will retry after 1.854001298s: waiting for machine to come up
	I1206 18:53:58.384664   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:53:58.385173   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:53:58.385208   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:53:58.385120   79356 retry.go:31] will retry after 2.275249658s: waiting for machine to come up
	I1206 18:54:00.662140   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:00.662647   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:54:00.662683   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:54:00.662592   79356 retry.go:31] will retry after 1.779474818s: waiting for machine to come up
	I1206 18:54:02.443252   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:02.443642   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:54:02.443671   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:54:02.443589   79356 retry.go:31] will retry after 2.745668719s: waiting for machine to come up
	I1206 18:54:05.190798   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:05.191126   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:54:05.191157   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:54:05.191070   79356 retry.go:31] will retry after 3.29996064s: waiting for machine to come up
	I1206 18:54:08.494713   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:08.495270   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find current IP address of domain ingress-addon-legacy-283223 in network mk-ingress-addon-legacy-283223
	I1206 18:54:08.495294   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | I1206 18:54:08.495158   79356 retry.go:31] will retry after 4.488063812s: waiting for machine to come up
	I1206 18:54:12.987233   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:12.987723   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Found IP for machine: 192.168.39.55
	I1206 18:54:12.987755   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Reserving static IP address...
	I1206 18:54:12.987771   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has current primary IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:12.988124   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-283223", mac: "52:54:00:95:92:3d", ip: "192.168.39.55"} in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.060858   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Getting to WaitForSSH function...
	I1206 18:54:13.060890   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Reserved static IP address: 192.168.39.55
	I1206 18:54:13.060910   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Waiting for SSH to be available...
	I1206 18:54:13.063504   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.063872   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.063903   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.063993   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Using SSH client type: external
	I1206 18:54:13.064016   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa (-rw-------)
	I1206 18:54:13.064054   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 18:54:13.064078   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | About to run SSH command:
	I1206 18:54:13.064095   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | exit 0
	I1206 18:54:13.160902   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | SSH cmd err, output: <nil>: 
	I1206 18:54:13.161186   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) KVM machine creation complete!
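	Note: the "will retry after ..." lines above are minikube polling with a jittered, growing backoff while the new KVM guest acquires a DHCP lease. A minimal Go sketch of that pattern (the lookupIP callback is a hypothetical stand-in for the libvirt DHCP-lease query; the intervals only approximate the jitter seen in the log, this is not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with a randomized, growing backoff until an
	// address is returned or the deadline passes.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff += backoff / 2 // grow roughly like the intervals in the log
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.55", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}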
	I1206 18:54:13.161502   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetConfigRaw
	I1206 18:54:13.162024   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:13.162233   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:13.162392   79322 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1206 18:54:13.162410   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetState
	I1206 18:54:13.163788   79322 main.go:141] libmachine: Detecting operating system of created instance...
	I1206 18:54:13.163826   79322 main.go:141] libmachine: Waiting for SSH to be available...
	I1206 18:54:13.163833   79322 main.go:141] libmachine: Getting to WaitForSSH function...
	I1206 18:54:13.163849   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:13.166214   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.166517   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.166543   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.166681   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:13.166864   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.167022   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.167146   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:13.167296   79322 main.go:141] libmachine: Using SSH client type: native
	I1206 18:54:13.167673   79322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1206 18:54:13.167687   79322 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1206 18:54:13.296592   79322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:54:13.296621   79322 main.go:141] libmachine: Detecting the provisioner...
	I1206 18:54:13.296632   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:13.299289   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.299563   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.299580   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.299748   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:13.299972   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.300152   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.300300   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:13.300505   79322 main.go:141] libmachine: Using SSH client type: native
	I1206 18:54:13.300848   79322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1206 18:54:13.300862   79322 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1206 18:54:13.429980   79322 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1206 18:54:13.430080   79322 main.go:141] libmachine: found compatible host: buildroot
	I1206 18:54:13.430088   79322 main.go:141] libmachine: Provisioning with buildroot...
	I1206 18:54:13.430097   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetMachineName
	I1206 18:54:13.430384   79322 buildroot.go:166] provisioning hostname "ingress-addon-legacy-283223"
	I1206 18:54:13.430414   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetMachineName
	I1206 18:54:13.430620   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:13.433131   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.433480   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.433511   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.433672   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:13.433871   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.434052   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.434185   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:13.434374   79322 main.go:141] libmachine: Using SSH client type: native
	I1206 18:54:13.434831   79322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1206 18:54:13.434853   79322 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-283223 && echo "ingress-addon-legacy-283223" | sudo tee /etc/hostname
	I1206 18:54:13.573855   79322 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-283223
	
	I1206 18:54:13.573883   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:13.576439   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.576741   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.576785   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.576911   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:13.577121   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.577301   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.577461   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:13.577604   79322 main.go:141] libmachine: Using SSH client type: native
	I1206 18:54:13.577911   79322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1206 18:54:13.577928   79322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-283223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-283223/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-283223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:54:13.713856   79322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
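	The hostname step above boils down to an idempotent /etc/hosts edit executed over SSH: rewrite an existing 127.0.1.1 entry or append one. A small sketch that reproduces the same shell snippet shown in the log (illustrative only, not minikube's provisioner code):

	package main

	import "fmt"

	// ensureHostnameMapping returns the shell snippet run over SSH in the log:
	// it rewrites an existing 127.0.1.1 line or appends one, so repeated
	// provisioning leaves /etc/hosts unchanged.
	func ensureHostnameMapping(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(ensureHostnameMapping("ingress-addon-legacy-283223"))
	}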
	I1206 18:54:13.713887   79322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 18:54:13.713910   79322 buildroot.go:174] setting up certificates
	I1206 18:54:13.713925   79322 provision.go:83] configureAuth start
	I1206 18:54:13.713938   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetMachineName
	I1206 18:54:13.714273   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetIP
	I1206 18:54:13.716993   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.717362   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.717387   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.717537   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:13.719713   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.720079   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.720100   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.720263   79322 provision.go:138] copyHostCerts
	I1206 18:54:13.720290   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 18:54:13.720321   79322 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 18:54:13.720333   79322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 18:54:13.720400   79322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 18:54:13.720473   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 18:54:13.720494   79322 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 18:54:13.720500   79322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 18:54:13.720522   79322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 18:54:13.720563   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 18:54:13.720578   79322 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 18:54:13.720584   79322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 18:54:13.720604   79322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 18:54:13.720700   79322 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-283223 san=[192.168.39.55 192.168.39.55 localhost 127.0.0.1 minikube ingress-addon-legacy-283223]
	I1206 18:54:13.784256   79322 provision.go:172] copyRemoteCerts
	I1206 18:54:13.784324   79322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:54:13.784351   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:13.787145   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.787481   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.787510   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.787760   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:13.787962   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.788117   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:13.788261   79322 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa Username:docker}
	I1206 18:54:13.882579   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 18:54:13.882659   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 18:54:13.906822   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 18:54:13.906902   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 18:54:13.929695   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 18:54:13.929782   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1206 18:54:13.952388   79322 provision.go:86] duration metric: configureAuth took 238.445597ms
	I1206 18:54:13.952425   79322 buildroot.go:189] setting minikube options for container-runtime
	I1206 18:54:13.952621   79322 config.go:182] Loaded profile config "ingress-addon-legacy-283223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1206 18:54:13.952696   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:13.955364   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.955722   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:13.955752   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:13.955926   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:13.956150   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.956320   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:13.956460   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:13.956620   79322 main.go:141] libmachine: Using SSH client type: native
	I1206 18:54:13.957067   79322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1206 18:54:13.957092   79322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:54:14.269387   79322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:54:14.269420   79322 main.go:141] libmachine: Checking connection to Docker...
	I1206 18:54:14.269433   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetURL
	I1206 18:54:14.270777   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Using libvirt version 6000000
	I1206 18:54:14.273218   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.273608   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:14.273644   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.273762   79322 main.go:141] libmachine: Docker is up and running!
	I1206 18:54:14.273778   79322 main.go:141] libmachine: Reticulating splines...
	I1206 18:54:14.273787   79322 client.go:171] LocalClient.Create took 24.13917073s
	I1206 18:54:14.273825   79322 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-283223" took 24.139282807s
	I1206 18:54:14.273848   79322 start.go:300] post-start starting for "ingress-addon-legacy-283223" (driver="kvm2")
	I1206 18:54:14.273864   79322 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:54:14.273887   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:14.274196   79322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:54:14.274225   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:14.276464   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.276799   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:14.276835   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.276925   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:14.277101   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:14.277282   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:14.277448   79322 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa Username:docker}
	I1206 18:54:14.370704   79322 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:54:14.375374   79322 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 18:54:14.375400   79322 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 18:54:14.375473   79322 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 18:54:14.375587   79322 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 18:54:14.375605   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /etc/ssl/certs/708342.pem
	I1206 18:54:14.375751   79322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 18:54:14.384500   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 18:54:14.410558   79322 start.go:303] post-start completed in 136.692122ms
	I1206 18:54:14.410613   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetConfigRaw
	I1206 18:54:14.411151   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetIP
	I1206 18:54:14.413890   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.414229   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:14.414272   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.414476   79322 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/config.json ...
	I1206 18:54:14.414642   79322 start.go:128] duration metric: createHost completed in 24.298582115s
	I1206 18:54:14.414664   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:14.416756   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.417095   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:14.417126   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.417297   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:14.417517   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:14.417666   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:14.417806   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:14.417969   79322 main.go:141] libmachine: Using SSH client type: native
	I1206 18:54:14.418275   79322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1206 18:54:14.418288   79322 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 18:54:14.546094   79322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701888854.524919221
	
	I1206 18:54:14.546125   79322 fix.go:206] guest clock: 1701888854.524919221
	I1206 18:54:14.546138   79322 fix.go:219] Guest: 2023-12-06 18:54:14.524919221 +0000 UTC Remote: 2023-12-06 18:54:14.414652188 +0000 UTC m=+28.402613046 (delta=110.267033ms)
	I1206 18:54:14.546181   79322 fix.go:190] guest clock delta is within tolerance: 110.267033ms
	I1206 18:54:14.546192   79322 start.go:83] releasing machines lock for "ingress-addon-legacy-283223", held for 24.430219893s
	I1206 18:54:14.546224   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:14.546549   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetIP
	I1206 18:54:14.549060   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.549447   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:14.549497   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.549610   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:14.550117   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:14.550270   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:14.550339   79322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:54:14.550382   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:14.550496   79322 ssh_runner.go:195] Run: cat /version.json
	I1206 18:54:14.550524   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:14.553840   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.553874   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.554218   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:14.554254   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:14.554284   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.554304   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:14.554469   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:14.554473   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:14.554675   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:14.554674   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:14.554885   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:14.554897   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:14.555062   79322 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa Username:docker}
	I1206 18:54:14.555077   79322 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa Username:docker}
	I1206 18:54:14.671129   79322 ssh_runner.go:195] Run: systemctl --version
	I1206 18:54:14.677189   79322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:54:14.831432   79322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 18:54:14.838478   79322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 18:54:14.838554   79322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:54:14.853951   79322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 18:54:14.853993   79322 start.go:475] detecting cgroup driver to use...
	I1206 18:54:14.854080   79322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:54:14.867278   79322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:54:14.880019   79322 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:54:14.880092   79322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:54:14.892602   79322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:54:14.905491   79322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 18:54:15.013436   79322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:54:15.131411   79322 docker.go:219] disabling docker service ...
	I1206 18:54:15.131494   79322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:54:15.144377   79322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:54:15.155847   79322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:54:15.265172   79322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:54:15.372620   79322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:54:15.385220   79322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:54:15.402096   79322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1206 18:54:15.402168   79322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:54:15.410954   79322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 18:54:15.411022   79322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:54:15.419882   79322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:54:15.428765   79322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:54:15.437519   79322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:54:15.446672   79322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:54:15.454664   79322 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 18:54:15.454751   79322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 18:54:15.467768   79322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 18:54:15.475985   79322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:54:15.579660   79322 ssh_runner.go:195] Run: sudo systemctl restart crio
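	The CRI-O setup above is a handful of in-place edits to /etc/crio/crio.conf.d/02-crio.conf followed by a service restart. A condensed sketch, assuming root access, with the pause image and cgroup manager values taken from the log (this is not minikube's ssh_runner implementation and omits the CNI and netfilter steps):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configureCRIO points CRI-O at the pause image, forces the given cgroup
	// manager, and restarts the service, mirroring the sed/systemctl sequence
	// in the log.
	func configureCRIO(pauseImage, cgroupManager string) error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		steps := [][]string{
			{"sed", "-i", fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage), conf},
			{"sed", "-i", fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager), conf},
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", "crio"},
		}
		for _, s := range steps {
			if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO("registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
			fmt.Println(err)
		}
	}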
	I1206 18:54:15.743132   79322 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 18:54:15.743208   79322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 18:54:15.748291   79322 start.go:543] Will wait 60s for crictl version
	I1206 18:54:15.748370   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:15.751987   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:54:15.793086   79322 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 18:54:15.793192   79322 ssh_runner.go:195] Run: crio --version
	I1206 18:54:15.839222   79322 ssh_runner.go:195] Run: crio --version
	I1206 18:54:15.884475   79322 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1206 18:54:15.885835   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetIP
	I1206 18:54:15.888819   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:15.889176   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:15.889207   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:15.889430   79322 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 18:54:15.893337   79322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:54:15.905491   79322 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1206 18:54:15.905558   79322 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:54:15.945957   79322 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1206 18:54:15.946026   79322 ssh_runner.go:195] Run: which lz4
	I1206 18:54:15.949735   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1206 18:54:15.949826   79322 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 18:54:15.953786   79322 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 18:54:15.953811   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1206 18:54:17.825592   79322 crio.go:444] Took 1.875790 seconds to copy over tarball
	I1206 18:54:17.825695   79322 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 18:54:21.021254   79322 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.195502945s)
	I1206 18:54:21.021288   79322 crio.go:451] Took 3.195665 seconds to extract the tarball
	I1206 18:54:21.021301   79322 ssh_runner.go:146] rm: /preloaded.tar.lz4
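	The preload step above copies an lz4-compressed image tarball to the guest, unpacks it into /var, and removes it. A rough Go equivalent, assuming root access and the same guest-side tarball path as in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks the lz4-compressed preload tarball into /var and
	// deletes it afterwards, mirroring the tar/rm steps in the log.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball missing: %w", err)
		}
		if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("extract: %v: %s", err, out)
		}
		return exec.Command("sudo", "rm", "-f", tarball).Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}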
	I1206 18:54:21.064411   79322 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:54:21.145448   79322 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1206 18:54:21.145476   79322 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 18:54:21.145534   79322 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:54:21.145577   79322 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:54:21.145602   79322 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1206 18:54:21.145637   79322 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1206 18:54:21.145689   79322 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1206 18:54:21.145696   79322 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:54:21.145773   79322 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:54:21.145821   79322 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:54:21.146931   79322 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1206 18:54:21.146974   79322 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1206 18:54:21.147017   79322 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:54:21.146936   79322 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:54:21.146934   79322 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:54:21.147042   79322 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:54:21.147139   79322 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1206 18:54:21.147290   79322 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:54:21.329949   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:54:21.346398   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:54:21.348960   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:54:21.366309   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1206 18:54:21.386086   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1206 18:54:21.386219   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:54:21.402413   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1206 18:54:21.429212   79322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:54:21.512452   79322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1206 18:54:21.512474   79322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1206 18:54:21.512499   79322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:54:21.512514   79322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:54:21.512544   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:21.512559   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:21.546812   79322 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1206 18:54:21.546874   79322 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1206 18:54:21.546927   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:21.565114   79322 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1206 18:54:21.565139   79322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1206 18:54:21.565164   79322 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1206 18:54:21.565172   79322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:54:21.565213   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:21.565223   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:21.574472   79322 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1206 18:54:21.574498   79322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1206 18:54:21.574516   79322 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1206 18:54:21.574521   79322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:54:21.574563   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:21.574594   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:54:21.574563   79322 ssh_runner.go:195] Run: which crictl
	I1206 18:54:21.574617   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:54:21.574665   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1206 18:54:21.574711   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:54:21.574731   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1206 18:54:21.669974   79322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1206 18:54:21.691757   79322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1206 18:54:21.691866   79322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1206 18:54:21.691960   79322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1206 18:54:21.692011   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1206 18:54:21.697500   79322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1206 18:54:21.697612   79322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:54:21.746090   79322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1206 18:54:21.746357   79322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1206 18:54:21.746411   79322 cache_images.go:92] LoadImages completed in 600.923557ms
	W1206 18:54:21.746509   79322 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
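
The "Unable to load cached images" warning above is a plain stat failure: the per-image tarball was never written under .minikube/cache, so the loader reports it and carries on, pulling the images later instead. A minimal sketch of that existence check; the helper name and error wrapping are illustrative assumptions, not minikube's actual code:

// cache_check.go — illustrative only; mirrors the stat-based check behind the
// "loading cached images: stat ...: no such file or directory" warning above.
package sketch

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// cachedImageReady reports whether a cached image tarball exists on disk.
func cachedImageReady(path string) (bool, error) {
	_, err := os.Stat(path)
	if errors.Is(err, fs.ErrNotExist) {
		// Same failure mode as in the log: the cache entry was never created.
		return false, fmt.Errorf("loading cached images: %w", err)
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
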
	I1206 18:54:21.746590   79322 ssh_runner.go:195] Run: crio config
	I1206 18:54:21.810580   79322 cni.go:84] Creating CNI manager for ""
	I1206 18:54:21.810605   79322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:54:21.810624   79322 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:54:21.810643   79322 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-283223 NodeName:ingress-addon-legacy-283223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 18:54:21.810797   79322 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-283223"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
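
A note on the evictionHard entries in the config above: the odd `"0%!"(MISSING)` strings are a Go fmt artifact, not the real configuration. The intended values are evidently plain "0%", but the rendered config was logged through a Printf-style call as the format string, so the `%` plus the following quote were parsed as a verb with no matching argument. A small runnable illustration of the effect:

// percent_artifact.go — why the evictionHard lines above read `"0%!"(MISSING)`.
package main

import "fmt"

func main() {
	line := `nodefs.available: "0%"`
	fmt.Printf(line + "\n")  // prints: nodefs.available: "0%!"(MISSING)
	fmt.Printf("%s\n", line) // prints: nodefs.available: "0%"
}
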
	I1206 18:54:21.810874   79322 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-283223 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-283223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
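
The kubelet drop-in shown above is rendered with the per-profile values (binary path, hostname override, node IP) filled in, then shipped to the VM by the `scp memory -->` steps that follow. A minimal sketch of rendering such a unit with text/template; the struct, template, and field names are assumptions for illustration, not minikube's actual code:

// kubelet_unit.go — illustrative rendering of a kubelet systemd drop-in.
package main

import (
	"os"
	"text/template"
)

// unitParams holds only the values that vary in the ExecStart line above.
type unitParams struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	p := unitParams{
		KubeletPath: "/var/lib/minikube/binaries/v1.18.20/kubelet",
		NodeName:    "ingress-addon-legacy-283223",
		NodeIP:      "192.168.39.55",
	}
	// Writing to stdout here; minikube instead ships the rendered bytes over
	// SSH (the "scp memory --> ..." steps that follow in the log).
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
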
	I1206 18:54:21.810928   79322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1206 18:54:21.820506   79322 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 18:54:21.820569   79322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 18:54:21.829682   79322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1206 18:54:21.845295   79322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1206 18:54:21.860767   79322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1206 18:54:21.876684   79322 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I1206 18:54:21.880675   79322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:54:21.892125   79322 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223 for IP: 192.168.39.55
	I1206 18:54:21.892158   79322 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:21.892311   79322 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 18:54:21.892353   79322 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 18:54:21.892405   79322 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.key
	I1206 18:54:21.892429   79322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt with IP's: []
	I1206 18:54:22.583494   79322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt ...
	I1206 18:54:22.583527   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: {Name:mk6742e1a5338c1a2f75048ed15079b5dc9bd807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:22.583728   79322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.key ...
	I1206 18:54:22.583751   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.key: {Name:mk088363c1a8bf875f9520307bbc018031be6994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:22.583863   79322 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.key.23a33066
	I1206 18:54:22.583881   79322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.crt.23a33066 with IP's: [192.168.39.55 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 18:54:22.745894   79322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.crt.23a33066 ...
	I1206 18:54:22.745923   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.crt.23a33066: {Name:mkc67a1a1f19cde4daeefb6af1398c3abf6493fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:22.746106   79322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.key.23a33066 ...
	I1206 18:54:22.746126   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.key.23a33066: {Name:mk3a0ae307714fa0070abbe786959f15fa2ceafb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:22.746225   79322 certs.go:337] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.crt.23a33066 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.crt
	I1206 18:54:22.746318   79322 certs.go:341] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.key.23a33066 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.key
	I1206 18:54:22.746373   79322 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.key
	I1206 18:54:22.746390   79322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.crt with IP's: []
	I1206 18:54:22.861430   79322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.crt ...
	I1206 18:54:22.861463   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.crt: {Name:mk0ad2caa16e553aa0b9c033b4a165269fe9b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:22.861665   79322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.key ...
	I1206 18:54:22.861686   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.key: {Name:mkcfbaf675f47946df317ed7ad7af7e4e9baf9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
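
The crypto.go lines above create the profile's client, apiserver, and proxy-client keypairs; the apiserver certificate is signed for the node IP, the service VIP, and loopback addresses. A compact sketch of producing a CA-signed certificate with those IP SANs using crypto/x509; this is illustrative only and not minikube's crypto.go:

// signed_cert.go — what "generating minikube signed cert ... with IP's:
// [192.168.39.55 10.96.0.1 127.0.0.1 10.0.0.1]" amounts to.
package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert returns DER bytes for a serving cert signed by caCert/caKey.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the IP SANs listed in the log line above
			net.ParseIP("192.168.39.55"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
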
	I1206 18:54:22.861791   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 18:54:22.861811   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 18:54:22.861821   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 18:54:22.861834   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 18:54:22.861849   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 18:54:22.861862   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 18:54:22.861879   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 18:54:22.861892   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 18:54:22.861942   79322 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 18:54:22.861977   79322 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 18:54:22.861987   79322 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 18:54:22.862013   79322 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 18:54:22.862042   79322 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 18:54:22.862070   79322 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 18:54:22.862108   79322 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 18:54:22.862144   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /usr/share/ca-certificates/708342.pem
	I1206 18:54:22.862163   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:54:22.862175   79322 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem -> /usr/share/ca-certificates/70834.pem
	I1206 18:54:22.862804   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 18:54:22.886955   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 18:54:22.910472   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 18:54:22.933776   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 18:54:22.956508   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 18:54:22.981543   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 18:54:23.004774   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 18:54:23.028484   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 18:54:23.050619   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 18:54:23.073264   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 18:54:23.096620   79322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 18:54:23.119305   79322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 18:54:23.135930   79322 ssh_runner.go:195] Run: openssl version
	I1206 18:54:23.141499   79322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 18:54:23.151933   79322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:54:23.156643   79322 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:54:23.156772   79322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:54:23.162348   79322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 18:54:23.173362   79322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 18:54:23.184303   79322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 18:54:23.189304   79322 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 18:54:23.189371   79322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 18:54:23.195193   79322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 18:54:23.205817   79322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 18:54:23.216582   79322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 18:54:23.221683   79322 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 18:54:23.221757   79322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 18:54:23.229249   79322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 18:54:23.240297   79322 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 18:54:23.244503   79322 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:54:23.244558   79322 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-283223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-283223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:54:23.244644   79322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 18:54:23.244707   79322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 18:54:23.282414   79322 cri.go:89] found id: ""
	I1206 18:54:23.282505   79322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 18:54:23.292760   79322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 18:54:23.302291   79322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 18:54:23.312116   79322 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 18:54:23.312189   79322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1206 18:54:23.366474   79322 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1206 18:54:23.366566   79322 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 18:54:23.496109   79322 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 18:54:23.496235   79322 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 18:54:23.496337   79322 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 18:54:23.705531   79322 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:54:23.705697   79322 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:54:23.705747   79322 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 18:54:23.822932   79322 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 18:54:23.981829   79322 out.go:204]   - Generating certificates and keys ...
	I1206 18:54:23.981943   79322 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 18:54:23.982056   79322 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 18:54:24.034962   79322 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 18:54:24.144349   79322 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 18:54:24.425184   79322 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 18:54:24.685867   79322 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 18:54:25.112724   79322 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 18:54:25.112936   79322 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-283223 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I1206 18:54:25.313349   79322 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 18:54:25.313535   79322 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-283223 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I1206 18:54:25.498360   79322 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 18:54:25.653963   79322 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 18:54:25.766498   79322 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 18:54:25.766662   79322 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 18:54:25.918622   79322 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 18:54:25.979901   79322 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 18:54:26.097671   79322 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 18:54:26.305277   79322 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 18:54:26.306128   79322 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 18:54:26.308016   79322 out.go:204]   - Booting up control plane ...
	I1206 18:54:26.308134   79322 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 18:54:26.313805   79322 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 18:54:26.314839   79322 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 18:54:26.316980   79322 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 18:54:26.319505   79322 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 18:54:34.822402   79322 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503313 seconds
	I1206 18:54:34.822576   79322 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 18:54:34.842626   79322 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 18:54:35.365790   79322 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 18:54:35.365981   79322 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-283223 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 18:54:35.874312   79322 kubeadm.go:322] [bootstrap-token] Using token: foimex.ego63fulg3014nsu
	I1206 18:54:35.876053   79322 out.go:204]   - Configuring RBAC rules ...
	I1206 18:54:35.876189   79322 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 18:54:35.887667   79322 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 18:54:35.897792   79322 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 18:54:35.900561   79322 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 18:54:35.908460   79322 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 18:54:35.913183   79322 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 18:54:35.927330   79322 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 18:54:36.218856   79322 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 18:54:36.310503   79322 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 18:54:36.311757   79322 kubeadm.go:322] 
	I1206 18:54:36.311854   79322 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 18:54:36.311867   79322 kubeadm.go:322] 
	I1206 18:54:36.311949   79322 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 18:54:36.311957   79322 kubeadm.go:322] 
	I1206 18:54:36.311980   79322 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 18:54:36.312059   79322 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 18:54:36.312130   79322 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 18:54:36.312143   79322 kubeadm.go:322] 
	I1206 18:54:36.312236   79322 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 18:54:36.312356   79322 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 18:54:36.312448   79322 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 18:54:36.312456   79322 kubeadm.go:322] 
	I1206 18:54:36.312522   79322 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 18:54:36.312592   79322 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 18:54:36.312598   79322 kubeadm.go:322] 
	I1206 18:54:36.312670   79322 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token foimex.ego63fulg3014nsu \
	I1206 18:54:36.312767   79322 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 18:54:36.312790   79322 kubeadm.go:322]     --control-plane 
	I1206 18:54:36.312796   79322 kubeadm.go:322] 
	I1206 18:54:36.312862   79322 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 18:54:36.312871   79322 kubeadm.go:322] 
	I1206 18:54:36.312935   79322 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token foimex.ego63fulg3014nsu \
	I1206 18:54:36.313040   79322 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 18:54:36.313632   79322 kubeadm.go:322] W1206 18:54:23.357749     965 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1206 18:54:36.313738   79322 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 18:54:36.313850   79322 kubeadm.go:322] W1206 18:54:26.307296     965 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1206 18:54:36.313987   79322 kubeadm.go:322] W1206 18:54:26.308388     965 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1206 18:54:36.314011   79322 cni.go:84] Creating CNI manager for ""
	I1206 18:54:36.314021   79322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:54:36.315740   79322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 18:54:36.317329   79322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 18:54:36.352259   79322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 18:54:36.380342   79322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 18:54:36.380367   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:36.380397   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=ingress-addon-legacy-283223 minikube.k8s.io/updated_at=2023_12_06T18_54_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:36.630376   79322 ops.go:34] apiserver oom_adj: -16
	I1206 18:54:36.630437   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:36.856253   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:37.503268   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:38.002885   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:38.503220   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:39.002958   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:39.502670   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:40.003264   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:40.503328   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:41.003637   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:41.502792   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:42.002751   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:42.502649   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:43.002943   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:43.503311   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:44.002757   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:44.503079   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:45.002636   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:45.503424   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:46.003585   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:46.502638   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:47.003432   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:47.503512   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:48.002711   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:48.503664   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:49.003305   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:49.503273   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:50.003476   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:50.503293   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:51.003049   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:51.503360   79322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:54:51.793788   79322 kubeadm.go:1088] duration metric: took 15.413473619s to wait for elevateKubeSystemPrivileges.
	I1206 18:54:51.793838   79322 kubeadm.go:406] StartCluster complete in 28.549276498s
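
The long run of `kubectl get sa default` invocations above is a simple poll: the command is retried roughly every 500ms until the default ServiceAccount exists, which is what gates elevateKubeSystemPrivileges before the cluster-admin rolebinding is considered usable. A rough sketch of that pattern; the helper name and arguments are assumptions, not minikube's implementation:

// wait_sa.go — illustrative ~500ms polling for the default ServiceAccount.
package sketch

import (
	"context"
	"os/exec"
	"time"
)

// waitForDefaultSA retries the kubectl query until it succeeds or the
// deadline passes, returning the last error on timeout.
func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		time.Sleep(500 * time.Millisecond)
	}
}
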
	I1206 18:54:51.793867   79322 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:51.793977   79322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:54:51.794782   79322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:54:51.795067   79322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 18:54:51.795210   79322 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 18:54:51.795281   79322 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-283223"
	I1206 18:54:51.795303   79322 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-283223"
	I1206 18:54:51.795327   79322 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-283223"
	I1206 18:54:51.795337   79322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-283223"
	I1206 18:54:51.795358   79322 config.go:182] Loaded profile config "ingress-addon-legacy-283223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1206 18:54:51.795393   79322 host.go:66] Checking if "ingress-addon-legacy-283223" exists ...
	I1206 18:54:51.795804   79322 kapi.go:59] client config for ingress-addon-legacy-283223: &rest.Config{Host:"https://192.168.39.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:54:51.795970   79322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:54:51.796003   79322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:54:51.795972   79322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:54:51.796116   79322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:54:51.796564   79322 cert_rotation.go:137] Starting client certificate rotation controller
	I1206 18:54:51.811457   79322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I1206 18:54:51.811987   79322 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:54:51.812518   79322 main.go:141] libmachine: Using API Version  1
	I1206 18:54:51.812544   79322 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:54:51.812906   79322 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:54:51.813101   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetState
	I1206 18:54:51.815845   79322 kapi.go:59] client config for ingress-addon-legacy-283223: &rest.Config{Host:"https://192.168.39.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:54:51.815978   79322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I1206 18:54:51.816180   79322 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-283223"
	I1206 18:54:51.816221   79322 host.go:66] Checking if "ingress-addon-legacy-283223" exists ...
	I1206 18:54:51.816377   79322 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:54:51.816640   79322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:54:51.816673   79322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:54:51.816847   79322 main.go:141] libmachine: Using API Version  1
	I1206 18:54:51.816875   79322 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:54:51.817220   79322 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:54:51.817870   79322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:54:51.817905   79322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:54:51.831224   79322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1206 18:54:51.831635   79322 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:54:51.832108   79322 main.go:141] libmachine: Using API Version  1
	I1206 18:54:51.832127   79322 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:54:51.832143   79322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I1206 18:54:51.832477   79322 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:54:51.832538   79322 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:54:51.832974   79322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:54:51.833008   79322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:54:51.833013   79322 main.go:141] libmachine: Using API Version  1
	I1206 18:54:51.833026   79322 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:54:51.833439   79322 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:54:51.833641   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetState
	I1206 18:54:51.835554   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:51.837753   79322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:54:51.839478   79322 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:54:51.839501   79322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 18:54:51.839522   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:51.842930   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:51.843451   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:51.843484   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:51.843617   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:51.843799   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:51.843982   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:51.844136   79322 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa Username:docker}
	I1206 18:54:51.848594   79322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41349
	I1206 18:54:51.848936   79322 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:54:51.849389   79322 main.go:141] libmachine: Using API Version  1
	I1206 18:54:51.849413   79322 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:54:51.849695   79322 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:54:51.849902   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetState
	I1206 18:54:51.851514   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .DriverName
	I1206 18:54:51.851751   79322 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 18:54:51.851768   79322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 18:54:51.851789   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHHostname
	I1206 18:54:51.854402   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:51.854830   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:92:3d", ip: ""} in network mk-ingress-addon-legacy-283223: {Iface:virbr1 ExpiryTime:2023-12-06 19:54:06 +0000 UTC Type:0 Mac:52:54:00:95:92:3d Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ingress-addon-legacy-283223 Clientid:01:52:54:00:95:92:3d}
	I1206 18:54:51.854873   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | domain ingress-addon-legacy-283223 has defined IP address 192.168.39.55 and MAC address 52:54:00:95:92:3d in network mk-ingress-addon-legacy-283223
	I1206 18:54:51.855035   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHPort
	I1206 18:54:51.855217   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHKeyPath
	I1206 18:54:51.855362   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .GetSSHUsername
	I1206 18:54:51.855501   79322 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/ingress-addon-legacy-283223/id_rsa Username:docker}
	W1206 18:54:51.978555   79322 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-283223" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1206 18:54:51.978583   79322 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
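
The coredns rescale failure above is Kubernetes' optimistic-concurrency conflict ("the object has been modified"): the Deployment changed between the read and the update, and minikube classifies it as non-retryable at this point and moves on. The standard client-go way to retry such an update re-reads the object on every attempt so the write targets the latest resourceVersion; a sketch, with the function name and clientset parameter as assumptions:

// conflict_retry.go — illustrative conflict-aware rescale of the coredns
// Deployment using client-go's RetryOnConflict helper.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleCoreDNS re-reads the Deployment on each attempt before updating it.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		dep, err := cs.AppsV1().Deployments("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		dep.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments("kube-system").Update(ctx, dep, metav1.UpdateOptions{})
		return err
	})
}
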
	I1206 18:54:51.978606   79322 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:54:51.980233   79322 out.go:177] * Verifying Kubernetes components...
	I1206 18:54:51.981503   79322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:54:52.021711   79322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:54:52.066390   79322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 18:54:52.070040   79322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 18:54:52.070613   79322 kapi.go:59] client config for ingress-addon-legacy-283223: &rest.Config{Host:"https://192.168.39.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
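The rest.Config dump above has QPS:0 and Burst:0, so client-go falls back to its built-in client-side rate limits (by default roughly 5 requests per second with a burst of 10). The "Waited for ... due to client-side throttling, not priority and fairness" lines later in this log are that limiter at work. A minimal sketch of raising those limits on an assumed, already-built *rest.Config:

package example

import "k8s.io/client-go/rest"

// relaxThrottle loosens client-side rate limiting; cfg is assumed to have
// been created elsewhere (e.g. from a kubeconfig).
func relaxThrottle(cfg *rest.Config) {
	cfg.QPS = 50    // steady-state requests per second allowed by the client
	cfg.Burst = 100 // short bursts above QPS before the limiter delays calls
}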
	I1206 18:54:52.070889   79322 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-283223" to be "Ready" ...
	I1206 18:54:52.073827   79322 node_ready.go:49] node "ingress-addon-legacy-283223" has status "Ready":"True"
	I1206 18:54:52.073852   79322 node_ready.go:38] duration metric: took 2.944318ms waiting for node "ingress-addon-legacy-283223" to be "Ready" ...
	I1206 18:54:52.073864   79322 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 18:54:52.082080   79322 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-69h4q" in "kube-system" namespace to be "Ready" ...
	I1206 18:54:52.639010   79322 main.go:141] libmachine: Making call to close driver server
	I1206 18:54:52.639039   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .Close
	I1206 18:54:52.639018   79322 main.go:141] libmachine: Making call to close driver server
	I1206 18:54:52.639087   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .Close
	I1206 18:54:52.639324   79322 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:54:52.639347   79322 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:54:52.639352   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Closing plugin on server side
	I1206 18:54:52.639384   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Closing plugin on server side
	I1206 18:54:52.639356   79322 main.go:141] libmachine: Making call to close driver server
	I1206 18:54:52.639400   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .Close
	I1206 18:54:52.639414   79322 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:54:52.639431   79322 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:54:52.639442   79322 main.go:141] libmachine: Making call to close driver server
	I1206 18:54:52.639453   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .Close
	I1206 18:54:52.639595   79322 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:54:52.639617   79322 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:54:52.639670   79322 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:54:52.639680   79322 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:54:52.647293   79322 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
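The success line above is the result of the sed | kubectl replace pipeline run at 18:54:52.070040, which inserts a hosts block for host.minikube.internal (pointing at the host-side gateway 192.168.39.1) ahead of the forward plugin in the CoreDNS Corefile, and also enables the log plugin. A hedged client-go equivalent is sketched below; the hosts text is taken from the sed expression in the log, while the package, function name, clientset and ctx are assumptions.

package example

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const hostsBlock = "        hosts {\n" +
	"           192.168.39.1 host.minikube.internal\n" +
	"           fallthrough\n" +
	"        }\n"

// injectHostRecord adds a hosts{} entry for host.minikube.internal to the
// coredns ConfigMap, roughly what the shell pipeline in the log does.
func injectHostRecord(ctx context.Context, clientset kubernetes.Interface) error {
	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		return nil // already injected
	}
	// Insert the hosts block just before the forward plugin, mirroring the
	// sed expression in the log.
	corefile = strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
	cm.Data["Corefile"] = corefile
	_, err = clientset.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}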
	I1206 18:54:52.666652   79322 main.go:141] libmachine: Making call to close driver server
	I1206 18:54:52.666679   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) Calling .Close
	I1206 18:54:52.666941   79322 main.go:141] libmachine: Successfully made call to close driver server
	I1206 18:54:52.666995   79322 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 18:54:52.667001   79322 main.go:141] libmachine: (ingress-addon-legacy-283223) DBG | Closing plugin on server side
	I1206 18:54:52.668811   79322 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1206 18:54:52.670554   79322 addons.go:502] enable addons completed in 875.355636ms: enabled=[storage-provisioner default-storageclass]
	I1206 18:54:54.159323   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:54:56.614886   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:54:59.113202   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:01.114549   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:03.613174   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:06.113173   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:08.113766   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:10.613382   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:12.614518   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:15.112563   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:17.612451   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:19.613837   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:21.614962   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:24.126539   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:26.612748   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:28.612936   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:30.613134   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:32.613706   79322 pod_ready.go:102] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"False"
	I1206 18:55:35.113598   79322 pod_ready.go:92] pod "coredns-66bff467f8-69h4q" in "kube-system" namespace has status "Ready":"True"
	I1206 18:55:35.113624   79322 pod_ready.go:81] duration metric: took 43.031516443s waiting for pod "coredns-66bff467f8-69h4q" in "kube-system" namespace to be "Ready" ...
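The pod_ready lines above poll the coredns pod every couple of seconds until its Ready condition flips to True, with a 6m0s cap; here that took about 43s. A minimal sketch of the same kind of wait using apimachinery's wait helpers (available in recent apimachinery releases); the package, function name, clientset and ctx are illustrative assumptions, not minikube's implementation.

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout
// expires.
func waitPodReady(ctx context.Context, clientset kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := clientset.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}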
	I1206 18:55:35.113637   79322 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-pld8p" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.119575   79322 pod_ready.go:92] pod "coredns-66bff467f8-pld8p" in "kube-system" namespace has status "Ready":"True"
	I1206 18:55:35.119596   79322 pod_ready.go:81] duration metric: took 5.950618ms waiting for pod "coredns-66bff467f8-pld8p" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.119608   79322 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.125833   79322 pod_ready.go:92] pod "etcd-ingress-addon-legacy-283223" in "kube-system" namespace has status "Ready":"True"
	I1206 18:55:35.125853   79322 pod_ready.go:81] duration metric: took 6.238646ms waiting for pod "etcd-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.125864   79322 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.130436   79322 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-283223" in "kube-system" namespace has status "Ready":"True"
	I1206 18:55:35.130456   79322 pod_ready.go:81] duration metric: took 4.586001ms waiting for pod "kube-apiserver-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.130468   79322 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.135237   79322 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-283223" in "kube-system" namespace has status "Ready":"True"
	I1206 18:55:35.135263   79322 pod_ready.go:81] duration metric: took 4.780813ms waiting for pod "kube-controller-manager-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.135278   79322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bhb7c" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.306854   79322 request.go:629] Waited for 171.476062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhb7c
	I1206 18:55:35.507180   79322 request.go:629] Waited for 196.396343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/nodes/ingress-addon-legacy-283223
	I1206 18:55:35.510999   79322 pod_ready.go:92] pod "kube-proxy-bhb7c" in "kube-system" namespace has status "Ready":"True"
	I1206 18:55:35.511024   79322 pod_ready.go:81] duration metric: took 375.738977ms waiting for pod "kube-proxy-bhb7c" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.511032   79322 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.707146   79322 request.go:629] Waited for 196.007272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-283223
	I1206 18:55:35.907319   79322 request.go:629] Waited for 196.450164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/nodes/ingress-addon-legacy-283223
	I1206 18:55:35.910976   79322 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-283223" in "kube-system" namespace has status "Ready":"True"
	I1206 18:55:35.911000   79322 pod_ready.go:81] duration metric: took 399.961681ms waiting for pod "kube-scheduler-ingress-addon-legacy-283223" in "kube-system" namespace to be "Ready" ...
	I1206 18:55:35.911008   79322 pod_ready.go:38] duration metric: took 43.837134043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 18:55:35.911047   79322 api_server.go:52] waiting for apiserver process to appear ...
	I1206 18:55:35.911100   79322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 18:55:35.925896   79322 api_server.go:72] duration metric: took 43.947263075s to wait for apiserver process to appear ...
	I1206 18:55:35.925920   79322 api_server.go:88] waiting for apiserver healthz status ...
	I1206 18:55:35.925936   79322 api_server.go:253] Checking apiserver healthz at https://192.168.39.55:8443/healthz ...
	I1206 18:55:35.932427   79322 api_server.go:279] https://192.168.39.55:8443/healthz returned 200:
	ok
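The healthz probe above hits https://192.168.39.55:8443/healthz directly and gets a 200 with the literal body "ok". The same endpoint can be reached through a clientset's discovery REST client; a hedged sketch, with clientset and ctx assumed:

package example

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthy requests /healthz on the apiserver and expects "ok".
func apiserverHealthy(ctx context.Context, clientset kubernetes.Interface) error {
	body, err := clientset.Discovery().RESTClient().
		Get().
		AbsPath("/healthz").
		DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected /healthz response: %q", body)
	}
	return nil
}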
	I1206 18:55:35.933579   79322 api_server.go:141] control plane version: v1.18.20
	I1206 18:55:35.933604   79322 api_server.go:131] duration metric: took 7.677195ms to wait for apiserver health ...
	I1206 18:55:35.933616   79322 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 18:55:36.107241   79322 request.go:629] Waited for 173.513528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/namespaces/kube-system/pods
	I1206 18:55:36.114302   79322 system_pods.go:59] 8 kube-system pods found
	I1206 18:55:36.114332   79322 system_pods.go:61] "coredns-66bff467f8-69h4q" [e070c4f8-b704-489e-96ea-29e73cc2b607] Running
	I1206 18:55:36.114336   79322 system_pods.go:61] "coredns-66bff467f8-pld8p" [ca0f3a3d-8a58-4681-bb97-58e0767e5587] Running
	I1206 18:55:36.114341   79322 system_pods.go:61] "etcd-ingress-addon-legacy-283223" [661ebf33-ce59-46cc-af86-86bd42a3c732] Running
	I1206 18:55:36.114347   79322 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-283223" [0e5b95b9-82b9-44ea-af9c-439990bee1d0] Running
	I1206 18:55:36.114352   79322 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-283223" [4a5bfef4-ed68-4bae-b67c-714387c1cb58] Running
	I1206 18:55:36.114355   79322 system_pods.go:61] "kube-proxy-bhb7c" [dad65a75-b29e-4be7-8456-24c4ad7b0337] Running
	I1206 18:55:36.114359   79322 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-283223" [2751698e-d453-48cd-aab3-b7059daa146d] Running
	I1206 18:55:36.114363   79322 system_pods.go:61] "storage-provisioner" [ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0] Running
	I1206 18:55:36.114369   79322 system_pods.go:74] duration metric: took 180.747435ms to wait for pod list to return data ...
	I1206 18:55:36.114376   79322 default_sa.go:34] waiting for default service account to be created ...
	I1206 18:55:36.306864   79322 request.go:629] Waited for 192.383878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/namespaces/default/serviceaccounts
	I1206 18:55:36.310787   79322 default_sa.go:45] found service account: "default"
	I1206 18:55:36.310815   79322 default_sa.go:55] duration metric: took 196.429167ms for default service account to be created ...
	I1206 18:55:36.310825   79322 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 18:55:36.507256   79322 request.go:629] Waited for 196.358754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/namespaces/kube-system/pods
	I1206 18:55:36.513260   79322 system_pods.go:86] 8 kube-system pods found
	I1206 18:55:36.513286   79322 system_pods.go:89] "coredns-66bff467f8-69h4q" [e070c4f8-b704-489e-96ea-29e73cc2b607] Running
	I1206 18:55:36.513292   79322 system_pods.go:89] "coredns-66bff467f8-pld8p" [ca0f3a3d-8a58-4681-bb97-58e0767e5587] Running
	I1206 18:55:36.513296   79322 system_pods.go:89] "etcd-ingress-addon-legacy-283223" [661ebf33-ce59-46cc-af86-86bd42a3c732] Running
	I1206 18:55:36.513301   79322 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-283223" [0e5b95b9-82b9-44ea-af9c-439990bee1d0] Running
	I1206 18:55:36.513305   79322 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-283223" [4a5bfef4-ed68-4bae-b67c-714387c1cb58] Running
	I1206 18:55:36.513311   79322 system_pods.go:89] "kube-proxy-bhb7c" [dad65a75-b29e-4be7-8456-24c4ad7b0337] Running
	I1206 18:55:36.513315   79322 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-283223" [2751698e-d453-48cd-aab3-b7059daa146d] Running
	I1206 18:55:36.513319   79322 system_pods.go:89] "storage-provisioner" [ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0] Running
	I1206 18:55:36.513326   79322 system_pods.go:126] duration metric: took 202.494784ms to wait for k8s-apps to be running ...
	I1206 18:55:36.513333   79322 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 18:55:36.513377   79322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:55:36.526174   79322 system_svc.go:56] duration metric: took 12.827834ms WaitForService to wait for kubelet.
	I1206 18:55:36.526205   79322 kubeadm.go:581] duration metric: took 44.547577855s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 18:55:36.526230   79322 node_conditions.go:102] verifying NodePressure condition ...
	I1206 18:55:36.706609   79322 request.go:629] Waited for 180.288976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.55:8443/api/v1/nodes
	I1206 18:55:36.710495   79322 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 18:55:36.710524   79322 node_conditions.go:123] node cpu capacity is 2
	I1206 18:55:36.710537   79322 node_conditions.go:105] duration metric: took 184.302182ms to run NodePressure ...
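The NodePressure check above lists the nodes and reads the capacity fields reported here (ephemeral storage 17784752Ki, 2 CPUs) before verifying that no pressure conditions are set. A rough sketch of reading the same fields with client-go; as before, the package, function name, clientset and ctx are assumptions for illustration.

package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNodePressure prints each node's storage/cpu capacity and flags any
// memory or disk pressure conditions that are currently True.
func checkNodePressure(ctx context.Context, clientset kubernetes.Interface) error {
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("%s: condition %s is True\n", n.Name, c.Type)
			}
		}
	}
	return nil
}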
	I1206 18:55:36.710549   79322 start.go:228] waiting for startup goroutines ...
	I1206 18:55:36.710555   79322 start.go:233] waiting for cluster config update ...
	I1206 18:55:36.710565   79322 start.go:242] writing updated cluster config ...
	I1206 18:55:36.710823   79322 ssh_runner.go:195] Run: rm -f paused
	I1206 18:55:36.760910   79322 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1206 18:55:36.763037   79322 out.go:177] 
	W1206 18:55:36.764591   79322 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1206 18:55:36.765937   79322 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1206 18:55:36.767420   79322 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-283223" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 18:54:02 UTC, ends at Wed 2023-12-06 18:58:35 UTC. --
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.165648445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701889115165636994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9cfab233-5534-414e-8f48-d820c5407ae0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.166261376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bd498671-8918-4ee8-8573-a17f57c7311a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.166308616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bd498671-8918-4ee8-8573-a17f57c7311a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.166601220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d663105dc760ee37c3ec1368d7acdca4bfd8c4c81e68e9f7ab0b58a421efc54,PodSandboxId:776d7e3ba5d5b57c09ecc6457745f49fb59ce89d51ca002ed0bf1b654ce11255,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701889104726961158,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p44s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fe7590e-a43a-423a-ba26-23786392b795,},Annotations:map[string]string{io.kubernetes.container.hash: 587c1275,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3697632ac98006f51153a901553be7a56bdbffc27f87c50c4681029ca868d0a1,PodSandboxId:4116581b894232baf3ae63031a35e20496eec5e1fe578d6f41d7df751269e211,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888964659737168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e94db7c-b3d5-43a7-87d6-7d33def921e1,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87846c81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8440cb6950349327d7a5dba304c0d0d098acce920380a512a5bee64c30047f5f,PodSandboxId:77318ae1b8dd9b1c497ac204b74e0eaa81f7c9b093e09401b5e398bf1f193656,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701888949339909963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-czpzk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3fc3b8a3-6551-4fe2-9684-3d08e098f28d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfa0b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2affd6b6e09f1ff04db9449687f7cf542141f283fad4edadeb00d797822a3fe7,PodSandboxId:298f2981c397faba3b9b7fa8b7b7d667e48961078213380460e97fc33365835d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888941368657027,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tgbbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c98dfce-9575-4f43-bc30-f7480cc118e7,},Annotations:map[string]string{io.kubernetes.container.hash: ab6e3b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5699dff6f36eb0fc761b7283efca431791e57f09c364d30938ccc22a7ff21298,PodSandboxId:f239a31ec5ba5b8e23fe56cf400ecf338f2f923f284cf90e4427382d36f38a0b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888940941504378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7vm9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 367c6028-1f96-4d2a-99b6-ccc71343fd36,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb7d9ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac85f652cd581149e21e2284b8aa4b069326bd17a194b2455b383e875a698c06,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888924033700581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3d2fb747e2e455d3519b34a6e6a7b0b6a006e435397e873403aa2773485b57,PodSandboxId:f4874a5ef8140dcd9e76e47e86561fa60300268f6d9ba36c502d45fa741263e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701888894372520495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhb7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad65a75-b29e-4be7-8456-24c4ad7b0337,},Annotations:map[string]string{io.kubernetes.container.hash: dec7ca16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff21954192a38bfe83487d2641eff07ecab57a181cc67bdc9e72f68730a9f8,PodSandboxId:21d2f471ae3a490778ecb053d25beee11b5a4f110c3a1c297a34493745e037ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893996168229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-69h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e070c4f8-b704-489e-96ea-29e73cc2b607,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da462cfecebb88bb7b0cec35c0fef391cfcb5b08f112064bd7671994e5231aa,Pod
SandboxId:892884eb28a0c6013a11c7b6e7327ff0858d4d904aba4f2f80047cb1bda4e42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893944474312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pld8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f3a3d-8a58-4681-bb97-58e0767e5587,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932eedac8f0dc19c2a57b67fe003b9d700d0b5fa8719cf6fa009049e0b14387b,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888893302234045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9928ab6457c858a258bf2925993b7e58f1af25c13b0e092771f0aa85a8850b9,PodSandboxId:ba8c0d3557745fe2fdd0169702ef853ac89c0efd5b38e0de963bc1f8f119be2b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701888869542513569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f66c70d6739e3dbb45911d7f657326,},Annotations:map[string]string{io.kubernetes.container.hash: 380fbdb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaa50e09f902643ea2d7c8e749f27794f5c2db614cda1a42264c12fdced9698,PodSandboxId:3b238264858a5c44d7d56f19ac926c88b6c41175bf808c47192e24dddf9b6333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701888868465416468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3551d61fcebe1ee1e04473d43b674e04d5b885066dafa093ee49b4922af6ac15,PodSandboxId:07f8f1cd1040572d54bad4811e5ab5b8df858cf4d7f9cc52ca447217870974c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701888868304117212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0d66b4f186d6a359054550250a9247,},Annotations:map[string]string{io.kubernetes.container.hash: 9217489d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85138efe9a8b4a763dec02fe3d3c2672ff934f979e36ab01130f253ac2c254f,PodSandboxId:343f2e6f3e7772901112ce9f347ce5bbc42783c3797a85c6dad77551e3d08f32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701888868195636623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bd498671-8918-4ee8-8573-a17f57c7311a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.214286659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=966866d1-a315-4af3-9338-42c1de5f9a6d name=/runtime.v1.RuntimeService/Version
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.214371579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=966866d1-a315-4af3-9338-42c1de5f9a6d name=/runtime.v1.RuntimeService/Version
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.216027213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6a20f0a9-d4f5-4b56-8ef9-576a7fbece77 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.216484538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701889115216473571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=6a20f0a9-d4f5-4b56-8ef9-576a7fbece77 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.217075907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=adf90df1-3dbd-4903-ae8c-2fc68743c3ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.217124080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=adf90df1-3dbd-4903-ae8c-2fc68743c3ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.217571352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d663105dc760ee37c3ec1368d7acdca4bfd8c4c81e68e9f7ab0b58a421efc54,PodSandboxId:776d7e3ba5d5b57c09ecc6457745f49fb59ce89d51ca002ed0bf1b654ce11255,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701889104726961158,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p44s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fe7590e-a43a-423a-ba26-23786392b795,},Annotations:map[string]string{io.kubernetes.container.hash: 587c1275,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3697632ac98006f51153a901553be7a56bdbffc27f87c50c4681029ca868d0a1,PodSandboxId:4116581b894232baf3ae63031a35e20496eec5e1fe578d6f41d7df751269e211,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888964659737168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e94db7c-b3d5-43a7-87d6-7d33def921e1,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87846c81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8440cb6950349327d7a5dba304c0d0d098acce920380a512a5bee64c30047f5f,PodSandboxId:77318ae1b8dd9b1c497ac204b74e0eaa81f7c9b093e09401b5e398bf1f193656,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701888949339909963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-czpzk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3fc3b8a3-6551-4fe2-9684-3d08e098f28d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfa0b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2affd6b6e09f1ff04db9449687f7cf542141f283fad4edadeb00d797822a3fe7,PodSandboxId:298f2981c397faba3b9b7fa8b7b7d667e48961078213380460e97fc33365835d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888941368657027,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tgbbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c98dfce-9575-4f43-bc30-f7480cc118e7,},Annotations:map[string]string{io.kubernetes.container.hash: ab6e3b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5699dff6f36eb0fc761b7283efca431791e57f09c364d30938ccc22a7ff21298,PodSandboxId:f239a31ec5ba5b8e23fe56cf400ecf338f2f923f284cf90e4427382d36f38a0b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888940941504378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7vm9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 367c6028-1f96-4d2a-99b6-ccc71343fd36,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb7d9ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac85f652cd581149e21e2284b8aa4b069326bd17a194b2455b383e875a698c06,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888924033700581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3d2fb747e2e455d3519b34a6e6a7b0b6a006e435397e873403aa2773485b57,PodSandboxId:f4874a5ef8140dcd9e76e47e86561fa60300268f6d9ba36c502d45fa741263e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701888894372520495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhb7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad65a75-b29e-4be7-8456-24c4ad7b0337,},Annotations:map[string]string{io.kubernetes.container.hash: dec7ca16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff21954192a38bfe83487d2641eff07ecab57a181cc67bdc9e72f68730a9f8,PodSandboxId:21d2f471ae3a490778ecb053d25beee11b5a4f110c3a1c297a34493745e037ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893996168229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-69h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e070c4f8-b704-489e-96ea-29e73cc2b607,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da462cfecebb88bb7b0cec35c0fef391cfcb5b08f112064bd7671994e5231aa,Pod
SandboxId:892884eb28a0c6013a11c7b6e7327ff0858d4d904aba4f2f80047cb1bda4e42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893944474312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pld8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f3a3d-8a58-4681-bb97-58e0767e5587,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932eedac8f0dc19c2a57b67fe003b9d700d0b5fa8719cf6fa009049e0b14387b,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888893302234045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9928ab6457c858a258bf2925993b7e58f1af25c13b0e092771f0aa85a8850b9,PodSandboxId:ba8c0d3557745fe2fdd0169702ef853ac89c0efd5b38e0de963bc1f8f119be2b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701888869542513569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f66c70d6739e3dbb45911d7f657326,},Annotations:map[string]string{io.kubernetes.container.hash: 380fbdb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaa50e09f902643ea2d7c8e749f27794f5c2db614cda1a42264c12fdced9698,PodSandboxId:3b238264858a5c44d7d56f19ac926c88b6c41175bf808c47192e24dddf9b6333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701888868465416468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3551d61fcebe1ee1e04473d43b674e04d5b885066dafa093ee49b4922af6ac15,PodSandboxId:07f8f1cd1040572d54bad4811e5ab5b8df858cf4d7f9cc52ca447217870974c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701888868304117212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0d66b4f186d6a359054550250a9247,},Annotations:map[string]string{io.kubernetes.container.hash: 9217489d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85138efe9a8b4a763dec02fe3d3c2672ff934f979e36ab01130f253ac2c254f,PodSandboxId:343f2e6f3e7772901112ce9f347ce5bbc42783c3797a85c6dad77551e3d08f32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701888868195636623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=adf90df1-3dbd-4903-ae8c-2fc68743c3ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.255496762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e215cd70-0389-4faf-9258-2e862bf7d099 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.255556500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e215cd70-0389-4faf-9258-2e862bf7d099 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.256333097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f479c729-d56c-4882-b192-672c2dc692d9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.256960031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701889115256938426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=f479c729-d56c-4882-b192-672c2dc692d9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.257886262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=02072359-59ce-4654-a09d-ef68faf654e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.257941886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=02072359-59ce-4654-a09d-ef68faf654e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.258241562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d663105dc760ee37c3ec1368d7acdca4bfd8c4c81e68e9f7ab0b58a421efc54,PodSandboxId:776d7e3ba5d5b57c09ecc6457745f49fb59ce89d51ca002ed0bf1b654ce11255,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701889104726961158,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p44s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fe7590e-a43a-423a-ba26-23786392b795,},Annotations:map[string]string{io.kubernetes.container.hash: 587c1275,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3697632ac98006f51153a901553be7a56bdbffc27f87c50c4681029ca868d0a1,PodSandboxId:4116581b894232baf3ae63031a35e20496eec5e1fe578d6f41d7df751269e211,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888964659737168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e94db7c-b3d5-43a7-87d6-7d33def921e1,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87846c81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8440cb6950349327d7a5dba304c0d0d098acce920380a512a5bee64c30047f5f,PodSandboxId:77318ae1b8dd9b1c497ac204b74e0eaa81f7c9b093e09401b5e398bf1f193656,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701888949339909963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-czpzk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3fc3b8a3-6551-4fe2-9684-3d08e098f28d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfa0b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2affd6b6e09f1ff04db9449687f7cf542141f283fad4edadeb00d797822a3fe7,PodSandboxId:298f2981c397faba3b9b7fa8b7b7d667e48961078213380460e97fc33365835d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888941368657027,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tgbbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c98dfce-9575-4f43-bc30-f7480cc118e7,},Annotations:map[string]string{io.kubernetes.container.hash: ab6e3b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5699dff6f36eb0fc761b7283efca431791e57f09c364d30938ccc22a7ff21298,PodSandboxId:f239a31ec5ba5b8e23fe56cf400ecf338f2f923f284cf90e4427382d36f38a0b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888940941504378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7vm9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 367c6028-1f96-4d2a-99b6-ccc71343fd36,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb7d9ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac85f652cd581149e21e2284b8aa4b069326bd17a194b2455b383e875a698c06,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888924033700581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3d2fb747e2e455d3519b34a6e6a7b0b6a006e435397e873403aa2773485b57,PodSandboxId:f4874a5ef8140dcd9e76e47e86561fa60300268f6d9ba36c502d45fa741263e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701888894372520495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhb7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad65a75-b29e-4be7-8456-24c4ad7b0337,},Annotations:map[string]string{io.kubernetes.container.hash: dec7ca16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff21954192a38bfe83487d2641eff07ecab57a181cc67bdc9e72f68730a9f8,PodSandboxId:21d2f471ae3a490778ecb053d25beee11b5a4f110c3a1c297a34493745e037ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893996168229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-69h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e070c4f8-b704-489e-96ea-29e73cc2b607,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da462cfecebb88bb7b0cec35c0fef391cfcb5b08f112064bd7671994e5231aa,Pod
SandboxId:892884eb28a0c6013a11c7b6e7327ff0858d4d904aba4f2f80047cb1bda4e42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893944474312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pld8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f3a3d-8a58-4681-bb97-58e0767e5587,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932eedac8f0dc19c2a57b67fe003b9d700d0b5fa8719cf6fa009049e0b14387b,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888893302234045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9928ab6457c858a258bf2925993b7e58f1af25c13b0e092771f0aa85a8850b9,PodSandboxId:ba8c0d3557745fe2fdd0169702ef853ac89c0efd5b38e0de963bc1f8f119be2b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701888869542513569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f66c70d6739e3dbb45911d7f657326,},Annotations:map[string]string{io.kubernetes.container.hash: 380fbdb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaa50e09f902643ea2d7c8e749f27794f5c2db614cda1a42264c12fdced9698,PodSandboxId:3b238264858a5c44d7d56f19ac926c88b6c41175bf808c47192e24dddf9b6333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701888868465416468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3551d61fcebe1ee1e04473d43b674e04d5b885066dafa093ee49b4922af6ac15,PodSandboxId:07f8f1cd1040572d54bad4811e5ab5b8df858cf4d7f9cc52ca447217870974c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701888868304117212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0d66b4f186d6a359054550250a9247,},Annotations:map[string]string{io.kubernetes.container.hash: 9217489d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85138efe9a8b4a763dec02fe3d3c2672ff934f979e36ab01130f253ac2c254f,PodSandboxId:343f2e6f3e7772901112ce9f347ce5bbc42783c3797a85c6dad77551e3d08f32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701888868195636623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=02072359-59ce-4654-a09d-ef68faf654e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.291362679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9fd71d03-bb72-49ce-8414-097dfbf46b24 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.291420367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9fd71d03-bb72-49ce-8414-097dfbf46b24 name=/runtime.v1.RuntimeService/Version
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.292733499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=504b22d9-3384-4b56-a4c7-d368f76799c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.293258808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701889115293242985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=504b22d9-3384-4b56-a4c7-d368f76799c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.293937754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=426295a8-d91e-4c2c-83f0-1195dcc14983 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.293983327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=426295a8-d91e-4c2c-83f0-1195dcc14983 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 18:58:35 ingress-addon-legacy-283223 crio[722]: time="2023-12-06 18:58:35.294316894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d663105dc760ee37c3ec1368d7acdca4bfd8c4c81e68e9f7ab0b58a421efc54,PodSandboxId:776d7e3ba5d5b57c09ecc6457745f49fb59ce89d51ca002ed0bf1b654ce11255,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701889104726961158,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p44s7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4fe7590e-a43a-423a-ba26-23786392b795,},Annotations:map[string]string{io.kubernetes.container.hash: 587c1275,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3697632ac98006f51153a901553be7a56bdbffc27f87c50c4681029ca868d0a1,PodSandboxId:4116581b894232baf3ae63031a35e20496eec5e1fe578d6f41d7df751269e211,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701888964659737168,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e94db7c-b3d5-43a7-87d6-7d33def921e1,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87846c81,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8440cb6950349327d7a5dba304c0d0d098acce920380a512a5bee64c30047f5f,PodSandboxId:77318ae1b8dd9b1c497ac204b74e0eaa81f7c9b093e09401b5e398bf1f193656,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701888949339909963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-czpzk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3fc3b8a3-6551-4fe2-9684-3d08e098f28d,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfa0b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2affd6b6e09f1ff04db9449687f7cf542141f283fad4edadeb00d797822a3fe7,PodSandboxId:298f2981c397faba3b9b7fa8b7b7d667e48961078213380460e97fc33365835d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888941368657027,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tgbbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c98dfce-9575-4f43-bc30-f7480cc118e7,},Annotations:map[string]string{io.kubernetes.container.hash: ab6e3b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5699dff6f36eb0fc761b7283efca431791e57f09c364d30938ccc22a7ff21298,PodSandboxId:f239a31ec5ba5b8e23fe56cf400ecf338f2f923f284cf90e4427382d36f38a0b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701888940941504378,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7vm9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 367c6028-1f96-4d2a-99b6-ccc71343fd36,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb7d9ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac85f652cd581149e21e2284b8aa4b069326bd17a194b2455b383e875a698c06,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701888924033700581,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3d2fb747e2e455d3519b34a6e6a7b0b6a006e435397e873403aa2773485b57,PodSandboxId:f4874a5ef8140dcd9e76e47e86561fa60300268f6d9ba36c502d45fa741263e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701888894372520495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhb7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad65a75-b29e-4be7-8456-24c4ad7b0337,},Annotations:map[string]string{io.kubernetes.container.hash: dec7ca16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caff21954192a38bfe83487d2641eff07ecab57a181cc67bdc9e72f68730a9f8,PodSandboxId:21d2f471ae3a490778ecb053d25beee11b5a4f110c3a1c297a34493745e037ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893996168229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-69h4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e070c4f8-b704-489e-96ea-29e73cc2b607,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da462cfecebb88bb7b0cec35c0fef391cfcb5b08f112064bd7671994e5231aa,Pod
SandboxId:892884eb28a0c6013a11c7b6e7327ff0858d4d904aba4f2f80047cb1bda4e42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701888893944474312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pld8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f3a3d-8a58-4681-bb97-58e0767e5587,},Annotations:map[string]string{io.kubernetes.container.hash: edc2d392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932eedac8f0dc19c2a57b67fe003b9d700d0b5fa8719cf6fa009049e0b14387b,PodSandboxId:95f418ed9129e7227434151cd0cd8a6c9805fd1e9c83d340ef7c401da3f37a5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701888893302234045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8d4d0d-dc5f-4b4b-8f47-479ed2abf7c0,},Annotations:map[string]string{io.kubernetes.container.hash: cc3126df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9928ab6457c858a258bf2925993b7e58f1af25c13b0e092771f0aa85a8850b9,PodSandboxId:ba8c0d3557745fe2fdd0169702ef853ac89c0efd5b38e0de963bc1f8f119be2b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701888869542513569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f66c70d6739e3dbb45911d7f657326,},Annotations:map[string]string{io.kubernetes.container.hash: 380fbdb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaa50e09f902643ea2d7c8e749f27794f5c2db614cda1a42264c12fdced9698,PodSandboxId:3b238264858a5c44d7d56f19ac926c88b6c41175bf808c47192e24dddf9b6333,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701888868465416468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3551d61fcebe1ee1e04473d43b674e04d5b885066dafa093ee49b4922af6ac15,PodSandboxId:07f8f1cd1040572d54bad4811e5ab5b8df858cf4d7f9cc52ca447217870974c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701888868304117212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0d66b4f186d6a359054550250a9247,},Annotations:map[string]string{io.kubernetes.container.hash: 9217489d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85138efe9a8b4a763dec02fe3d3c2672ff934f979e36ab01130f253ac2c254f,PodSandboxId:343f2e6f3e7772901112ce9f347ce5bbc42783c3797a85c6dad77551e3d08f32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701888868195636623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-283223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=426295a8-d91e-4c2c-83f0-1195dcc14983 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9d663105dc760       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            10 seconds ago      Running             hello-world-app           0                   776d7e3ba5d5b       hello-world-app-5f5d8b66bb-p44s7
	3697632ac9800       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   4116581b89423       nginx
	8440cb6950349       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   77318ae1b8dd9       ingress-nginx-controller-7fcf777cb7-czpzk
	2affd6b6e09f1       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   298f2981c397f       ingress-nginx-admission-patch-tgbbr
	5699dff6f36eb       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   f239a31ec5ba5       ingress-nginx-admission-create-7vm9p
	ac85f652cd581       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       1                   95f418ed9129e       storage-provisioner
	8f3d2fb747e2e       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   f4874a5ef8140       kube-proxy-bhb7c
	caff21954192a       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   21d2f471ae3a4       coredns-66bff467f8-69h4q
	4da462cfecebb       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   892884eb28a0c       coredns-66bff467f8-pld8p
	932eedac8f0dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   95f418ed9129e       storage-provisioner
	c9928ab6457c8       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   ba8c0d3557745       etcd-ingress-addon-legacy-283223
	8aaa50e09f902       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   3b238264858a5       kube-scheduler-ingress-addon-legacy-283223
	3551d61fcebe1       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   07f8f1cd10405       kube-apiserver-ingress-addon-legacy-283223
	d85138efe9a8b       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   343f2e6f3e777       kube-controller-manager-ingress-addon-legacy-283223
	
	* 
	* ==> coredns [4da462cfecebb88bb7b0cec35c0fef391cfcb5b08f112064bd7671994e5231aa] <==
	* I1206 18:55:24.148464       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-06 18:54:54.147861771 +0000 UTC m=+0.035278853) (total time: 30.000522097s):
	Trace[2019727887]: [30.000522097s] [30.000522097s] END
	E1206 18:55:24.148516       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1206 18:55:24.150975       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-06 18:54:54.148287396 +0000 UTC m=+0.035704483) (total time: 30.002662603s):
	Trace[1427131847]: [30.002662603s] [30.002662603s] END
	E1206 18:55:24.151008       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1206 18:55:24.151084       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-06 18:54:54.148557976 +0000 UTC m=+0.035975066) (total time: 30.002516937s):
	Trace[939984059]: [30.002516937s] [30.002516937s] END
	E1206 18:55:24.151087       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	CoreDNS-1.6.7
	linux/amd64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:56290 - 31154 "HINFO IN 1595570917601905790.777789746160052772. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.028207601s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.6:33521 - 36248 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000247002s
	[INFO] 10.244.0.6:33521 - 17659 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000095419s
	[INFO] 10.244.0.6:33521 - 12661 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000102503s
	[INFO] 10.244.0.6:33521 - 6701 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000107087s
	[INFO] 10.244.0.6:33521 - 64281 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000079142s
	[INFO] 10.244.0.6:33521 - 13786 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097213s
	[INFO] 10.244.0.6:33521 - 48668 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000152423s
	
	* 
	* ==> coredns [caff21954192a38bfe83487d2641eff07ecab57a181cc67bdc9e72f68730a9f8] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.6:48802 - 41600 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000735422s
	[INFO] 10.244.0.6:48802 - 18873 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000118111s
	[INFO] 10.244.0.6:48802 - 44108 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000186968s
	[INFO] 10.244.0.6:48802 - 9759 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077113s
	[INFO] 10.244.0.6:48802 - 45058 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000133689s
	[INFO] 10.244.0.6:48802 - 52695 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082864s
	[INFO] 10.244.0.6:48802 - 484 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067209s
	[INFO] 10.244.0.6:55787 - 21470 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100413s
	[INFO] 10.244.0.6:55787 - 33730 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054913s
	[INFO] 10.244.0.6:37332 - 41036 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000228593s
	[INFO] 10.244.0.6:55787 - 8172 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000137155s
	[INFO] 10.244.0.6:37332 - 7226 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030942s
	[INFO] 10.244.0.6:55787 - 52519 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036945s
	[INFO] 10.244.0.6:55787 - 3290 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032875s
	[INFO] 10.244.0.6:37332 - 61680 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028516s
	[INFO] 10.244.0.6:55787 - 3119 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029271s
	[INFO] 10.244.0.6:37332 - 54354 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000025924s
	[INFO] 10.244.0.6:55787 - 29574 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000476s
	[INFO] 10.244.0.6:37332 - 41354 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027435s
	[INFO] 10.244.0.6:37332 - 38866 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036075s
	[INFO] 10.244.0.6:37332 - 5964 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003237s
	I1206 18:55:24.198168       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-06 18:54:54.197463946 +0000 UTC m=+0.037671198) (total time: 30.00068867s):
	Trace[939984059]: [30.00068867s] [30.00068867s] END
	E1206 18:55:24.198208       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-283223
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-283223
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=ingress-addon-legacy-283223
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T18_54_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 18:54:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-283223
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 18:58:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 18:56:06 +0000   Wed, 06 Dec 2023 18:54:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 18:56:06 +0000   Wed, 06 Dec 2023 18:54:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 18:56:06 +0000   Wed, 06 Dec 2023 18:54:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 18:56:06 +0000   Wed, 06 Dec 2023 18:54:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ingress-addon-legacy-283223
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc716ba7b7eb40cd9573a8adca818c72
	  System UUID:                fc716ba7-b7eb-40cd-9573-a8adca818c72
	  Boot ID:                    51f108fd-67f5-4eca-90e7-7869d5c0de4a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-p44s7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-66bff467f8-69h4q                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 coredns-66bff467f8-pld8p                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-283223                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-apiserver-ingress-addon-legacy-283223             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-283223    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-bhb7c                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-283223             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             140Mi (3%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-283223 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-283223 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x4 over 4m9s)  kubelet     Node ingress-addon-legacy-283223 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s                kubelet     Node ingress-addon-legacy-283223 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s                kubelet     Node ingress-addon-legacy-283223 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s                kubelet     Node ingress-addon-legacy-283223 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m49s                kubelet     Node ingress-addon-legacy-283223 status is now: NodeReady
	  Normal  Starting                 3m41s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 6 18:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.091686] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.414532] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec 6 18:54] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.133596] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.095314] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.763801] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.117322] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.144536] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.107950] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.202745] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +8.232094] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +3.006316] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.248166] systemd-fstab-generator[1416]: Ignoring "noauto" for root device
	[ +17.721859] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 6 18:55] kauditd_printk_skb: 16 callbacks suppressed
	[Dec 6 18:56] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.190520] kauditd_printk_skb: 3 callbacks suppressed
	[Dec 6 18:58] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [c9928ab6457c858a258bf2925993b7e58f1af25c13b0e092771f0aa85a8850b9] <==
	* raft2023/12/06 18:54:29 INFO: ec8263ef63f6a581 switched to configuration voters=(17042293819748820353)
	2023-12-06 18:54:29.681060 W | auth: simple token is not cryptographically signed
	2023-12-06 18:54:29.685240 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-06 18:54:29.688563 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-06 18:54:29.689663 I | embed: listening for peers on 192.168.39.55:2380
	2023-12-06 18:54:29.689778 I | etcdserver: ec8263ef63f6a581 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/06 18:54:29 INFO: ec8263ef63f6a581 switched to configuration voters=(17042293819748820353)
	2023-12-06 18:54:29.689993 I | etcdserver/membership: added member ec8263ef63f6a581 [https://192.168.39.55:2380] to cluster 6efde86ab6af376b
	2023-12-06 18:54:29.690369 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/06 18:54:29 INFO: ec8263ef63f6a581 is starting a new election at term 1
	raft2023/12/06 18:54:29 INFO: ec8263ef63f6a581 became candidate at term 2
	raft2023/12/06 18:54:29 INFO: ec8263ef63f6a581 received MsgVoteResp from ec8263ef63f6a581 at term 2
	raft2023/12/06 18:54:29 INFO: ec8263ef63f6a581 became leader at term 2
	raft2023/12/06 18:54:29 INFO: raft.node: ec8263ef63f6a581 elected leader ec8263ef63f6a581 at term 2
	2023-12-06 18:54:29.875069 I | etcdserver: published {Name:ingress-addon-legacy-283223 ClientURLs:[https://192.168.39.55:2379]} to cluster 6efde86ab6af376b
	2023-12-06 18:54:29.875183 I | embed: ready to serve client requests
	2023-12-06 18:54:29.879021 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-06 18:54:29.879246 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-06 18:54:29.879525 I | embed: ready to serve client requests
	2023-12-06 18:54:29.880652 I | embed: serving client requests on 192.168.39.55:2379
	2023-12-06 18:54:29.881789 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-06 18:54:29.882029 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-06 18:54:51.779604 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (160.976946ms) to execute
	2023-12-06 18:55:46.557241 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13723" took too long (196.9575ms) to execute
	2023-12-06 18:55:56.852214 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:2 size:8316" took too long (212.388172ms) to execute
	
	* 
	* ==> kernel <==
	*  18:58:35 up 4 min,  0 users,  load average: 0.37, 0.40, 0.19
	Linux ingress-addon-legacy-283223 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3551d61fcebe1ee1e04473d43b674e04d5b885066dafa093ee49b4922af6ac15] <==
	* I1206 18:54:33.006081       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E1206 18:54:33.049962       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.55, ResourceVersion: 0, AdditionalErrorMsg: 
	I1206 18:54:33.070064       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1206 18:54:33.071463       1 cache.go:39] Caches are synced for autoregister controller
	I1206 18:54:33.071473       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 18:54:33.071481       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 18:54:33.106375       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1206 18:54:33.963649       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1206 18:54:33.963728       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1206 18:54:33.971747       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1206 18:54:33.976931       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1206 18:54:33.976968       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1206 18:54:34.463093       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 18:54:34.513540       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1206 18:54:34.580268       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.55]
	I1206 18:54:34.581190       1 controller.go:609] quota admission added evaluator for: endpoints
	I1206 18:54:34.586990       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 18:54:35.329313       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1206 18:54:36.193343       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1206 18:54:36.279555       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1206 18:54:36.673034       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 18:54:51.579645       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1206 18:54:51.581799       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1206 18:55:37.627158       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1206 18:56:00.653269       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [d85138efe9a8b4a763dec02fe3d3c2672ff934f979e36ab01130f253ac2c254f] <==
	* I1206 18:54:51.626070       1 shared_informer.go:230] Caches are synced for stateful set 
	I1206 18:54:51.627961       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1206 18:54:51.632339       1 shared_informer.go:230] Caches are synced for endpoint 
	I1206 18:54:51.653664       1 shared_informer.go:230] Caches are synced for GC 
	I1206 18:54:51.659805       1 shared_informer.go:230] Caches are synced for resource quota 
	I1206 18:54:51.708362       1 shared_informer.go:230] Caches are synced for resource quota 
	I1206 18:54:51.726429       1 shared_informer.go:230] Caches are synced for service account 
	I1206 18:54:51.775481       1 shared_informer.go:230] Caches are synced for attach detach 
	I1206 18:54:51.786700       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ef96ff7e-a49d-4288-a0ba-5bfdf7bfb76a", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1206 18:54:51.786951       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"2e2d3be9-3521-4857-9c4d-8fdf20ca6aed", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-bhb7c
	I1206 18:54:51.789779       1 shared_informer.go:230] Caches are synced for namespace 
	I1206 18:54:51.797172       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"19d7fc98-b93e-482e-b4c6-deb3b174a10d", APIVersion:"apps/v1", ResourceVersion:"315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-pld8p
	I1206 18:54:51.859609       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"19d7fc98-b93e-482e-b4c6-deb3b174a10d", APIVersion:"apps/v1", ResourceVersion:"315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-69h4q
	I1206 18:54:51.895476       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1206 18:54:51.895515       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1206 18:54:51.919235       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1206 18:55:37.609501       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e96a072a-32b7-4a3e-ab82-fae660d9d5c5", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1206 18:55:37.631071       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"ab557582-06b8-4b8e-927a-ae0f1a412ec2", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-czpzk
	I1206 18:55:37.707430       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"64cad780-a0ee-45da-9dde-d2189873ff38", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7vm9p
	I1206 18:55:37.790695       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3a60606f-f3fa-4e1e-a0b5-2cfaad07c84e", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-tgbbr
	I1206 18:55:41.154138       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"64cad780-a0ee-45da-9dde-d2189873ff38", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1206 18:55:42.177277       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3a60606f-f3fa-4e1e-a0b5-2cfaad07c84e", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1206 18:58:21.623554       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"495316b3-afeb-4d0f-985f-079b4fd468ba", APIVersion:"apps/v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1206 18:58:21.648153       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"56ab0186-af68-4f1a-a1aa-b3bf25502382", APIVersion:"apps/v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-p44s7
	E1206 18:58:32.369219       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-rnknp" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [8f3d2fb747e2e455d3519b34a6e6a7b0b6a006e435397e873403aa2773485b57] <==
	* W1206 18:54:54.568276       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1206 18:54:54.580100       1 node.go:136] Successfully retrieved node IP: 192.168.39.55
	I1206 18:54:54.580286       1 server_others.go:186] Using iptables Proxier.
	I1206 18:54:54.580795       1 server.go:583] Version: v1.18.20
	I1206 18:54:54.583033       1 config.go:315] Starting service config controller
	I1206 18:54:54.583065       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1206 18:54:54.583080       1 config.go:133] Starting endpoints config controller
	I1206 18:54:54.583087       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1206 18:54:54.683326       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1206 18:54:54.683326       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [8aaa50e09f902643ea2d7c8e749f27794f5c2db614cda1a42264c12fdced9698] <==
	* I1206 18:54:33.076021       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1206 18:54:33.076287       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 18:54:33.076295       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 18:54:33.076305       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1206 18:54:33.086775       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 18:54:33.087002       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 18:54:33.087107       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 18:54:33.087187       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 18:54:33.087267       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 18:54:33.087342       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 18:54:33.087425       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 18:54:33.087489       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 18:54:33.087552       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 18:54:33.087619       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 18:54:33.087687       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 18:54:33.088888       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 18:54:33.924552       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 18:54:33.981332       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 18:54:34.021485       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 18:54:34.034789       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 18:54:34.089420       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 18:54:34.089924       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 18:54:34.103062       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 18:54:34.268261       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1206 18:54:36.176515       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 18:54:02 UTC, ends at Wed 2023-12-06 18:58:35 UTC. --
	Dec 06 18:55:43 ingress-addon-legacy-283223 kubelet[1423]: W1206 18:55:43.342508    1423 pod_container_deletor.go:77] Container "298f2981c397faba3b9b7fa8b7b7d667e48961078213380460e97fc33365835d" not found in pod's containers
	Dec 06 18:55:43 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:55:43.373805    1423 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-7bcb6" (UniqueName: "kubernetes.io/secret/4c98dfce-9575-4f43-bc30-f7480cc118e7-ingress-nginx-admission-token-7bcb6") on node "ingress-addon-legacy-283223" DevicePath ""
	Dec 06 18:55:51 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:55:51.512928    1423 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 06 18:55:51 ingress-addon-legacy-283223 kubelet[1423]: E1206 18:55:51.514777    1423 reflector.go:178] object-"kube-system"/"minikube-ingress-dns-token-62hlq": Failed to list *v1.Secret: secrets "minikube-ingress-dns-token-62hlq" is forbidden: User "system:node:ingress-addon-legacy-283223" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "ingress-addon-legacy-283223" and this object
	Dec 06 18:55:51 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:55:51.600678    1423 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-62hlq" (UniqueName: "kubernetes.io/secret/d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c-minikube-ingress-dns-token-62hlq") pod "kube-ingress-dns-minikube" (UID: "d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c")
	Dec 06 18:56:00 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:56:00.832582    1423 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 06 18:56:00 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:56:00.930000    1423 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-75bxb" (UniqueName: "kubernetes.io/secret/9e94db7c-b3d5-43a7-87d6-7d33def921e1-default-token-75bxb") pod "nginx" (UID: "9e94db7c-b3d5-43a7-87d6-7d33def921e1")
	Dec 06 18:58:21 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:21.658156    1423 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 06 18:58:21 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:21.816110    1423 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-75bxb" (UniqueName: "kubernetes.io/secret/4fe7590e-a43a-423a-ba26-23786392b795-default-token-75bxb") pod "hello-world-app-5f5d8b66bb-p44s7" (UID: "4fe7590e-a43a-423a-ba26-23786392b795")
	Dec 06 18:58:23 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:23.315466    1423 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e8561c3cd9722fac458257d707dfdcf1ea0b1341615ffcd74b4e6a116eaae941
	Dec 06 18:58:23 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:23.423006    1423 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-62hlq" (UniqueName: "kubernetes.io/secret/d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c-minikube-ingress-dns-token-62hlq") pod "d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c" (UID: "d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c")
	Dec 06 18:58:23 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:23.429617    1423 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c-minikube-ingress-dns-token-62hlq" (OuterVolumeSpecName: "minikube-ingress-dns-token-62hlq") pod "d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c" (UID: "d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c"). InnerVolumeSpecName "minikube-ingress-dns-token-62hlq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:58:23 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:23.523469    1423 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-62hlq" (UniqueName: "kubernetes.io/secret/d8b933f5-9840-4f1f-a2c1-0ae90e0fe00c-minikube-ingress-dns-token-62hlq") on node "ingress-addon-legacy-283223" DevicePath ""
	Dec 06 18:58:23 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:23.732370    1423 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e8561c3cd9722fac458257d707dfdcf1ea0b1341615ffcd74b4e6a116eaae941
	Dec 06 18:58:23 ingress-addon-legacy-283223 kubelet[1423]: E1206 18:58:23.734518    1423 remote_runtime.go:295] ContainerStatus "e8561c3cd9722fac458257d707dfdcf1ea0b1341615ffcd74b4e6a116eaae941" from runtime service failed: rpc error: code = NotFound desc = could not find container "e8561c3cd9722fac458257d707dfdcf1ea0b1341615ffcd74b4e6a116eaae941": container with ID starting with e8561c3cd9722fac458257d707dfdcf1ea0b1341615ffcd74b4e6a116eaae941 not found: ID does not exist
	Dec 06 18:58:27 ingress-addon-legacy-283223 kubelet[1423]: E1206 18:58:27.754252    1423 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-czpzk.179e532062a1f838", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-czpzk", UID:"3fc3b8a3-6551-4fe2-9684-3d08e098f28d", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-283223"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1544ef4ecca7a38, ext:231601711781, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1544ef4ecca7a38, ext:231601711781, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-czpzk.179e532062a1f838" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 06 18:58:27 ingress-addon-legacy-283223 kubelet[1423]: E1206 18:58:27.788948    1423 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-czpzk.179e532062a1f838", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-czpzk", UID:"3fc3b8a3-6551-4fe2-9684-3d08e098f28d", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-283223"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1544ef4ecca7a38, ext:231601711781, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1544ef4eeacdc9a, ext:231633325319, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-czpzk.179e532062a1f838" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 06 18:58:30 ingress-addon-legacy-283223 kubelet[1423]: W1206 18:58:30.350657    1423 pod_container_deletor.go:77] Container "77318ae1b8dd9b1c497ac204b74e0eaa81f7c9b093e09401b5e398bf1f193656" not found in pod's containers
	Dec 06 18:58:31 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:31.950559    1423 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/3fc3b8a3-6551-4fe2-9684-3d08e098f28d-webhook-cert") pod "3fc3b8a3-6551-4fe2-9684-3d08e098f28d" (UID: "3fc3b8a3-6551-4fe2-9684-3d08e098f28d")
	Dec 06 18:58:31 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:31.950643    1423 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-mflvw" (UniqueName: "kubernetes.io/secret/3fc3b8a3-6551-4fe2-9684-3d08e098f28d-ingress-nginx-token-mflvw") pod "3fc3b8a3-6551-4fe2-9684-3d08e098f28d" (UID: "3fc3b8a3-6551-4fe2-9684-3d08e098f28d")
	Dec 06 18:58:31 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:31.953141    1423 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc3b8a3-6551-4fe2-9684-3d08e098f28d-ingress-nginx-token-mflvw" (OuterVolumeSpecName: "ingress-nginx-token-mflvw") pod "3fc3b8a3-6551-4fe2-9684-3d08e098f28d" (UID: "3fc3b8a3-6551-4fe2-9684-3d08e098f28d"). InnerVolumeSpecName "ingress-nginx-token-mflvw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:58:31 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:31.954875    1423 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc3b8a3-6551-4fe2-9684-3d08e098f28d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3fc3b8a3-6551-4fe2-9684-3d08e098f28d" (UID: "3fc3b8a3-6551-4fe2-9684-3d08e098f28d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:58:32 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:32.050979    1423 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/3fc3b8a3-6551-4fe2-9684-3d08e098f28d-webhook-cert") on node "ingress-addon-legacy-283223" DevicePath ""
	Dec 06 18:58:32 ingress-addon-legacy-283223 kubelet[1423]: I1206 18:58:32.051013    1423 reconciler.go:319] Volume detached for volume "ingress-nginx-token-mflvw" (UniqueName: "kubernetes.io/secret/3fc3b8a3-6551-4fe2-9684-3d08e098f28d-ingress-nginx-token-mflvw") on node "ingress-addon-legacy-283223" DevicePath ""
	Dec 06 18:58:32 ingress-addon-legacy-283223 kubelet[1423]: W1206 18:58:32.804365    1423 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/3fc3b8a3-6551-4fe2-9684-3d08e098f28d/volumes" does not exist
	
	* 
	* ==> storage-provisioner [932eedac8f0dc19c2a57b67fe003b9d700d0b5fa8719cf6fa009049e0b14387b] <==
	* I1206 18:54:53.414996       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 18:55:23.419021       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [ac85f652cd581149e21e2284b8aa4b069326bd17a194b2455b383e875a698c06] <==
	* I1206 18:55:24.153717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 18:55:24.168998       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 18:55:24.169068       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 18:55:24.176431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 18:55:24.176658       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-283223_45001cc5-1a14-4d74-98bf-dee4853d5b58!
	I1206 18:55:24.178010       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f7cda69-a4bd-4757-9e6d-55b310184197", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-283223_45001cc5-1a14-4d74-98bf-dee4853d5b58 became leader
	I1206 18:55:24.277657       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-283223_45001cc5-1a14-4d74-98bf-dee4853d5b58!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-283223 -n ingress-addon-legacy-283223
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-283223 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (164.89s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- sh -c "ping -c 1 192.168.39.1": exit status 1 (201.130499ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-shdgj): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-x24l4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-x24l4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-x24l4 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (195.494805ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-x24l4): exit status 1
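A note on this failure mode: busybox prints "ping: permission denied (are you root?)" when it can open neither a raw ICMP socket (the container lacks CAP_NET_RAW and is not running as root) nor an unprivileged ICMP datagram socket, which the kernel only permits for group IDs inside net.ipv4.ping_group_range. The following is a minimal diagnostic sketch, not part of the test run; it reuses a pod name from this report and assumes the kubeconfig context is named after the minikube profile:
	# Check whether unprivileged ICMP sockets are allowed inside the pod's network namespace
	kubectl --context multinode-593099 exec busybox-5bc68d56bd-x24l4 -- cat /proc/sys/net/ipv4/ping_group_range
	# The kernel default "1 0" is an empty range and disables them; a range such as "0 2147483647" allows any group to ping.
	# Alternatively, granting CAP_NET_RAW to the busybox container (securityContext.capabilities.add: ["NET_RAW"])
	# lets ping fall back to a raw ICMP socket.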
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-593099 -n multinode-593099
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-593099 logs -n 25: (1.366324192s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-112283 ssh -- ls                    | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-112283 ssh --                       | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-112283                           | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	| start   | -p mount-start-2-112283                           | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC |                     |
	|         | --profile mount-start-2-112283                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-112283 ssh -- ls                    | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-112283 ssh --                       | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-112283                           | mount-start-2-112283 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	| delete  | -p mount-start-1-090770                           | mount-start-1-090770 | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	| start   | -p multinode-593099                               | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:04 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- apply -f                   | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- rollout                    | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- get pods -o                | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- get pods -o                | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-shdgj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-x24l4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-shdgj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-x24l4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-shdgj -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-x24l4 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- get pods -o                | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-shdgj                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC |                     |
	|         | busybox-5bc68d56bd-shdgj -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC | 06 Dec 23 19:04 UTC |
	|         | busybox-5bc68d56bd-x24l4                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-593099 -- exec                       | multinode-593099     | jenkins | v1.32.0 | 06 Dec 23 19:04 UTC |                     |
	|         | busybox-5bc68d56bd-x24l4 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:02:46
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:02:46.061011   83344 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:02:46.061281   83344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:02:46.061289   83344 out.go:309] Setting ErrFile to fd 2...
	I1206 19:02:46.061294   83344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:02:46.061498   83344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:02:46.062101   83344 out.go:303] Setting JSON to false
	I1206 19:02:46.062967   83344 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6316,"bootTime":1701883050,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:02:46.063029   83344 start.go:138] virtualization: kvm guest
	I1206 19:02:46.065651   83344 out.go:177] * [multinode-593099] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:02:46.067514   83344 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:02:46.067528   83344 notify.go:220] Checking for updates...
	I1206 19:02:46.069074   83344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:02:46.070552   83344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:02:46.072441   83344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:02:46.074205   83344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:02:46.075876   83344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:02:46.077619   83344 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:02:46.113963   83344 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 19:02:46.115435   83344 start.go:298] selected driver: kvm2
	I1206 19:02:46.115450   83344 start.go:902] validating driver "kvm2" against <nil>
	I1206 19:02:46.115461   83344 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:02:46.116217   83344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:02:46.116316   83344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:02:46.131501   83344 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:02:46.131605   83344 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 19:02:46.131857   83344 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:02:46.131918   83344 cni.go:84] Creating CNI manager for ""
	I1206 19:02:46.131934   83344 cni.go:136] 0 nodes found, recommending kindnet
	I1206 19:02:46.131944   83344 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 19:02:46.131957   83344 start_flags.go:323] config:
	{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:02:46.132166   83344 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:02:46.135723   83344 out.go:177] * Starting control plane node multinode-593099 in cluster multinode-593099
	I1206 19:02:46.137712   83344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:02:46.137821   83344 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:02:46.137837   83344 cache.go:56] Caching tarball of preloaded images
	I1206 19:02:46.138030   83344 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:02:46.138054   83344 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:02:46.138817   83344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:02:46.138869   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json: {Name:mkac0b77ec00bdd57267a303d415eff308dc4810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:02:46.139255   83344 start.go:365] acquiring machines lock for multinode-593099: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:02:46.139335   83344 start.go:369] acquired machines lock for "multinode-593099" in 36.064µs
	I1206 19:02:46.139375   83344 start.go:93] Provisioning new machine with config: &{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:02:46.139528   83344 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 19:02:46.141455   83344 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1206 19:02:46.141643   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:02:46.141701   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:02:46.155955   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I1206 19:02:46.156376   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:02:46.156866   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:02:46.156887   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:02:46.157250   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:02:46.157423   83344 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:02:46.157595   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:02:46.157792   83344 start.go:159] libmachine.API.Create for "multinode-593099" (driver="kvm2")
	I1206 19:02:46.157847   83344 client.go:168] LocalClient.Create starting
	I1206 19:02:46.157892   83344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem
	I1206 19:02:46.157930   83344 main.go:141] libmachine: Decoding PEM data...
	I1206 19:02:46.157949   83344 main.go:141] libmachine: Parsing certificate...
	I1206 19:02:46.158001   83344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem
	I1206 19:02:46.158023   83344 main.go:141] libmachine: Decoding PEM data...
	I1206 19:02:46.158038   83344 main.go:141] libmachine: Parsing certificate...
	I1206 19:02:46.158054   83344 main.go:141] libmachine: Running pre-create checks...
	I1206 19:02:46.158064   83344 main.go:141] libmachine: (multinode-593099) Calling .PreCreateCheck
	I1206 19:02:46.158358   83344 main.go:141] libmachine: (multinode-593099) Calling .GetConfigRaw
	I1206 19:02:46.158797   83344 main.go:141] libmachine: Creating machine...
	I1206 19:02:46.158812   83344 main.go:141] libmachine: (multinode-593099) Calling .Create
	I1206 19:02:46.158931   83344 main.go:141] libmachine: (multinode-593099) Creating KVM machine...
	I1206 19:02:46.160213   83344 main.go:141] libmachine: (multinode-593099) DBG | found existing default KVM network
	I1206 19:02:46.160866   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:46.160697   83367 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1206 19:02:46.166607   83344 main.go:141] libmachine: (multinode-593099) DBG | trying to create private KVM network mk-multinode-593099 192.168.39.0/24...
	I1206 19:02:46.240604   83344 main.go:141] libmachine: (multinode-593099) DBG | private KVM network mk-multinode-593099 192.168.39.0/24 created
	I1206 19:02:46.240648   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:46.240553   83367 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:02:46.240685   83344 main.go:141] libmachine: (multinode-593099) Setting up store path in /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099 ...
	I1206 19:02:46.240714   83344 main.go:141] libmachine: (multinode-593099) Building disk image from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 19:02:46.240740   83344 main.go:141] libmachine: (multinode-593099) Downloading /home/jenkins/minikube-integration/17740-63652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1206 19:02:46.457523   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:46.457383   83367 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa...
	I1206 19:02:46.648594   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:46.648419   83367 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/multinode-593099.rawdisk...
	I1206 19:02:46.648627   83344 main.go:141] libmachine: (multinode-593099) DBG | Writing magic tar header
	I1206 19:02:46.648643   83344 main.go:141] libmachine: (multinode-593099) DBG | Writing SSH key tar header
	I1206 19:02:46.648656   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:46.648559   83367 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099 ...
	I1206 19:02:46.648683   83344 main.go:141] libmachine: (multinode-593099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099
	I1206 19:02:46.648711   83344 main.go:141] libmachine: (multinode-593099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines
	I1206 19:02:46.648772   83344 main.go:141] libmachine: (multinode-593099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:02:46.648789   83344 main.go:141] libmachine: (multinode-593099) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099 (perms=drwx------)
	I1206 19:02:46.648803   83344 main.go:141] libmachine: (multinode-593099) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines (perms=drwxr-xr-x)
	I1206 19:02:46.648813   83344 main.go:141] libmachine: (multinode-593099) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube (perms=drwxr-xr-x)
	I1206 19:02:46.648822   83344 main.go:141] libmachine: (multinode-593099) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652 (perms=drwxrwxr-x)
	I1206 19:02:46.648836   83344 main.go:141] libmachine: (multinode-593099) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 19:02:46.648849   83344 main.go:141] libmachine: (multinode-593099) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 19:02:46.648860   83344 main.go:141] libmachine: (multinode-593099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652
	I1206 19:02:46.648885   83344 main.go:141] libmachine: (multinode-593099) Creating domain...
	I1206 19:02:46.648931   83344 main.go:141] libmachine: (multinode-593099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1206 19:02:46.648966   83344 main.go:141] libmachine: (multinode-593099) DBG | Checking permissions on dir: /home/jenkins
	I1206 19:02:46.648996   83344 main.go:141] libmachine: (multinode-593099) DBG | Checking permissions on dir: /home
	I1206 19:02:46.649018   83344 main.go:141] libmachine: (multinode-593099) DBG | Skipping /home - not owner
	I1206 19:02:46.650099   83344 main.go:141] libmachine: (multinode-593099) define libvirt domain using xml: 
	I1206 19:02:46.650118   83344 main.go:141] libmachine: (multinode-593099) <domain type='kvm'>
	I1206 19:02:46.650126   83344 main.go:141] libmachine: (multinode-593099)   <name>multinode-593099</name>
	I1206 19:02:46.650131   83344 main.go:141] libmachine: (multinode-593099)   <memory unit='MiB'>2200</memory>
	I1206 19:02:46.650137   83344 main.go:141] libmachine: (multinode-593099)   <vcpu>2</vcpu>
	I1206 19:02:46.650149   83344 main.go:141] libmachine: (multinode-593099)   <features>
	I1206 19:02:46.650166   83344 main.go:141] libmachine: (multinode-593099)     <acpi/>
	I1206 19:02:46.650171   83344 main.go:141] libmachine: (multinode-593099)     <apic/>
	I1206 19:02:46.650176   83344 main.go:141] libmachine: (multinode-593099)     <pae/>
	I1206 19:02:46.650181   83344 main.go:141] libmachine: (multinode-593099)     
	I1206 19:02:46.650187   83344 main.go:141] libmachine: (multinode-593099)   </features>
	I1206 19:02:46.650197   83344 main.go:141] libmachine: (multinode-593099)   <cpu mode='host-passthrough'>
	I1206 19:02:46.650206   83344 main.go:141] libmachine: (multinode-593099)   
	I1206 19:02:46.650211   83344 main.go:141] libmachine: (multinode-593099)   </cpu>
	I1206 19:02:46.650219   83344 main.go:141] libmachine: (multinode-593099)   <os>
	I1206 19:02:46.650234   83344 main.go:141] libmachine: (multinode-593099)     <type>hvm</type>
	I1206 19:02:46.650247   83344 main.go:141] libmachine: (multinode-593099)     <boot dev='cdrom'/>
	I1206 19:02:46.650252   83344 main.go:141] libmachine: (multinode-593099)     <boot dev='hd'/>
	I1206 19:02:46.650261   83344 main.go:141] libmachine: (multinode-593099)     <bootmenu enable='no'/>
	I1206 19:02:46.650266   83344 main.go:141] libmachine: (multinode-593099)   </os>
	I1206 19:02:46.650274   83344 main.go:141] libmachine: (multinode-593099)   <devices>
	I1206 19:02:46.650280   83344 main.go:141] libmachine: (multinode-593099)     <disk type='file' device='cdrom'>
	I1206 19:02:46.650291   83344 main.go:141] libmachine: (multinode-593099)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/boot2docker.iso'/>
	I1206 19:02:46.650300   83344 main.go:141] libmachine: (multinode-593099)       <target dev='hdc' bus='scsi'/>
	I1206 19:02:46.650334   83344 main.go:141] libmachine: (multinode-593099)       <readonly/>
	I1206 19:02:46.650362   83344 main.go:141] libmachine: (multinode-593099)     </disk>
	I1206 19:02:46.650376   83344 main.go:141] libmachine: (multinode-593099)     <disk type='file' device='disk'>
	I1206 19:02:46.650390   83344 main.go:141] libmachine: (multinode-593099)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1206 19:02:46.650409   83344 main.go:141] libmachine: (multinode-593099)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/multinode-593099.rawdisk'/>
	I1206 19:02:46.650421   83344 main.go:141] libmachine: (multinode-593099)       <target dev='hda' bus='virtio'/>
	I1206 19:02:46.650433   83344 main.go:141] libmachine: (multinode-593099)     </disk>
	I1206 19:02:46.650449   83344 main.go:141] libmachine: (multinode-593099)     <interface type='network'>
	I1206 19:02:46.650464   83344 main.go:141] libmachine: (multinode-593099)       <source network='mk-multinode-593099'/>
	I1206 19:02:46.650477   83344 main.go:141] libmachine: (multinode-593099)       <model type='virtio'/>
	I1206 19:02:46.650491   83344 main.go:141] libmachine: (multinode-593099)     </interface>
	I1206 19:02:46.650503   83344 main.go:141] libmachine: (multinode-593099)     <interface type='network'>
	I1206 19:02:46.650516   83344 main.go:141] libmachine: (multinode-593099)       <source network='default'/>
	I1206 19:02:46.650529   83344 main.go:141] libmachine: (multinode-593099)       <model type='virtio'/>
	I1206 19:02:46.650542   83344 main.go:141] libmachine: (multinode-593099)     </interface>
	I1206 19:02:46.650555   83344 main.go:141] libmachine: (multinode-593099)     <serial type='pty'>
	I1206 19:02:46.650578   83344 main.go:141] libmachine: (multinode-593099)       <target port='0'/>
	I1206 19:02:46.650590   83344 main.go:141] libmachine: (multinode-593099)     </serial>
	I1206 19:02:46.650608   83344 main.go:141] libmachine: (multinode-593099)     <console type='pty'>
	I1206 19:02:46.650623   83344 main.go:141] libmachine: (multinode-593099)       <target type='serial' port='0'/>
	I1206 19:02:46.650631   83344 main.go:141] libmachine: (multinode-593099)     </console>
	I1206 19:02:46.650636   83344 main.go:141] libmachine: (multinode-593099)     <rng model='virtio'>
	I1206 19:02:46.650644   83344 main.go:141] libmachine: (multinode-593099)       <backend model='random'>/dev/random</backend>
	I1206 19:02:46.650652   83344 main.go:141] libmachine: (multinode-593099)     </rng>
	I1206 19:02:46.650665   83344 main.go:141] libmachine: (multinode-593099)     
	I1206 19:02:46.650675   83344 main.go:141] libmachine: (multinode-593099)     
	I1206 19:02:46.650683   83344 main.go:141] libmachine: (multinode-593099)   </devices>
	I1206 19:02:46.650689   83344 main.go:141] libmachine: (multinode-593099) </domain>
	I1206 19:02:46.650698   83344 main.go:141] libmachine: (multinode-593099) 
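The lines above show the KVM driver handing a complete domain XML document to libvirt ("define libvirt domain using xml" followed by "Creating domain..."). A minimal sketch of that same define-then-start flow using the libvirt Go bindings might look like the following; the connection URI and the placeholder XML string are illustrative assumptions, not minikube's actual driver code:

	// Define a persistent KVM domain from an XML document, then boot it.
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system") // assumed URI
		if err != nil {
			return err
		}
		defer conn.Close()

		// "define libvirt domain using xml": register the persistent domain.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()

		// "Creating domain...": actually start the defined machine.
		return dom.Create()
	}

	func main() {
		if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}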
	I1206 19:02:46.655537   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:15:4f:b2 in network default
	I1206 19:02:46.656189   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:46.656204   83344 main.go:141] libmachine: (multinode-593099) Ensuring networks are active...
	I1206 19:02:46.656977   83344 main.go:141] libmachine: (multinode-593099) Ensuring network default is active
	I1206 19:02:46.657338   83344 main.go:141] libmachine: (multinode-593099) Ensuring network mk-multinode-593099 is active
	I1206 19:02:46.657958   83344 main.go:141] libmachine: (multinode-593099) Getting domain xml...
	I1206 19:02:46.658756   83344 main.go:141] libmachine: (multinode-593099) Creating domain...
	I1206 19:02:47.898880   83344 main.go:141] libmachine: (multinode-593099) Waiting to get IP...
	I1206 19:02:47.899683   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:47.900173   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:47.900195   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:47.900139   83367 retry.go:31] will retry after 233.228431ms: waiting for machine to come up
	I1206 19:02:48.134842   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:48.135495   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:48.135519   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:48.135432   83367 retry.go:31] will retry after 358.767313ms: waiting for machine to come up
	I1206 19:02:48.496172   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:48.496557   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:48.496589   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:48.496498   83367 retry.go:31] will retry after 377.077748ms: waiting for machine to come up
	I1206 19:02:48.874967   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:48.875432   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:48.875458   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:48.875395   83367 retry.go:31] will retry after 606.979057ms: waiting for machine to come up
	I1206 19:02:49.484360   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:49.484738   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:49.484762   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:49.484698   83367 retry.go:31] will retry after 478.552135ms: waiting for machine to come up
	I1206 19:02:49.964363   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:49.964895   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:49.964936   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:49.964823   83367 retry.go:31] will retry after 700.513329ms: waiting for machine to come up
	I1206 19:02:50.666836   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:50.667306   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:50.667334   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:50.667250   83367 retry.go:31] will retry after 915.538595ms: waiting for machine to come up
	I1206 19:02:51.583934   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:51.584382   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:51.584416   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:51.584328   83367 retry.go:31] will retry after 1.174266542s: waiting for machine to come up
	I1206 19:02:52.760707   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:52.761140   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:52.761168   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:52.761084   83367 retry.go:31] will retry after 1.483020763s: waiting for machine to come up
	I1206 19:02:54.245769   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:54.246213   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:54.246246   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:54.246135   83367 retry.go:31] will retry after 1.705445185s: waiting for machine to come up
	I1206 19:02:55.954074   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:55.954462   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:55.954489   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:55.954400   83367 retry.go:31] will retry after 2.714562433s: waiting for machine to come up
	I1206 19:02:58.670639   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:02:58.671128   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:02:58.671160   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:02:58.671071   83367 retry.go:31] will retry after 3.462412387s: waiting for machine to come up
	I1206 19:03:02.135463   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:02.135811   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:03:02.135839   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:03:02.135747   83367 retry.go:31] will retry after 2.811428611s: waiting for machine to come up
	I1206 19:03:04.950638   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:04.950980   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:03:04.951004   83344 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:03:04.950955   83367 retry.go:31] will retry after 3.797656859s: waiting for machine to come up
	I1206 19:03:08.751673   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:08.752091   83344 main.go:141] libmachine: (multinode-593099) Found IP for machine: 192.168.39.125
	I1206 19:03:08.752122   83344 main.go:141] libmachine: (multinode-593099) Reserving static IP address...
	I1206 19:03:08.752138   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has current primary IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:08.752487   83344 main.go:141] libmachine: (multinode-593099) DBG | unable to find host DHCP lease matching {name: "multinode-593099", mac: "52:54:00:37:16:c6", ip: "192.168.39.125"} in network mk-multinode-593099
	I1206 19:03:08.828858   83344 main.go:141] libmachine: (multinode-593099) Reserved static IP address: 192.168.39.125
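The "will retry after ..." sequence above is a polling loop with growing, jittered delays: the driver keeps asking libvirt's DHCP leases for the domain's IP until one appears or a deadline passes. A rough, generic sketch of that pattern (lookupIP is a hypothetical stand-in for the lease query, and the timing constants are illustrative):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Jitter and grow the delay, roughly matching the increasing
			// "will retry after ..." intervals seen in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for machine IP")
	}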
	I1206 19:03:08.828894   83344 main.go:141] libmachine: (multinode-593099) Waiting for SSH to be available...
	I1206 19:03:08.828906   83344 main.go:141] libmachine: (multinode-593099) DBG | Getting to WaitForSSH function...
	I1206 19:03:08.832391   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:08.832778   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:minikube Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:08.832811   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:08.832917   83344 main.go:141] libmachine: (multinode-593099) DBG | Using SSH client type: external
	I1206 19:03:08.832952   83344 main.go:141] libmachine: (multinode-593099) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa (-rw-------)
	I1206 19:03:08.833008   83344 main.go:141] libmachine: (multinode-593099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:03:08.833030   83344 main.go:141] libmachine: (multinode-593099) DBG | About to run SSH command:
	I1206 19:03:08.833065   83344 main.go:141] libmachine: (multinode-593099) DBG | exit 0
	I1206 19:03:08.925327   83344 main.go:141] libmachine: (multinode-593099) DBG | SSH cmd err, output: <nil>: 
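The WaitForSSH step above uses the "external" SSH client type: it shells out to /usr/bin/ssh with the options listed in the log and runs `exit 0` until the command succeeds. A simplified version of that reachability probe (flags trimmed to the essentials):

	package main

	import "os/exec"

	// sshReachable returns true once `ssh docker@ip exit 0` succeeds.
	func sshReachable(ip, keyPath string) bool {
		cmd := exec.Command("/usr/bin/ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip,
			"exit 0")
		return cmd.Run() == nil
	}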
	I1206 19:03:08.925528   83344 main.go:141] libmachine: (multinode-593099) KVM machine creation complete!
	I1206 19:03:08.925819   83344 main.go:141] libmachine: (multinode-593099) Calling .GetConfigRaw
	I1206 19:03:08.926376   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:08.926559   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:08.926770   83344 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1206 19:03:08.926785   83344 main.go:141] libmachine: (multinode-593099) Calling .GetState
	I1206 19:03:08.927960   83344 main.go:141] libmachine: Detecting operating system of created instance...
	I1206 19:03:08.927977   83344 main.go:141] libmachine: Waiting for SSH to be available...
	I1206 19:03:08.927985   83344 main.go:141] libmachine: Getting to WaitForSSH function...
	I1206 19:03:08.927997   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:08.930233   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:08.930558   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:08.930588   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:08.930661   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:08.930858   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:08.930976   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:08.931088   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:08.931212   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:03:08.931548   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:03:08.931560   83344 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1206 19:03:09.052462   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:03:09.052491   83344 main.go:141] libmachine: Detecting the provisioner...
	I1206 19:03:09.052503   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:09.055312   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.055631   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:09.055661   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.055816   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:09.056060   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.056250   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.056410   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:09.056592   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:03:09.056930   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:03:09.056945   83344 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1206 19:03:09.177997   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1206 19:03:09.178088   83344 main.go:141] libmachine: found compatible host: buildroot
	I1206 19:03:09.178100   83344 main.go:141] libmachine: Provisioning with buildroot...
	I1206 19:03:09.178109   83344 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:03:09.178413   83344 buildroot.go:166] provisioning hostname "multinode-593099"
	I1206 19:03:09.178442   83344 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:03:09.178629   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:09.180928   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.181301   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:09.181339   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.181434   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:09.181626   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.181794   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.181934   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:09.182108   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:03:09.182429   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:03:09.182451   83344 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-593099 && echo "multinode-593099" | sudo tee /etc/hostname
	I1206 19:03:09.318099   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-593099
	
	I1206 19:03:09.318131   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:09.321060   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.321503   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:09.321549   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.321702   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:09.321919   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.322084   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.322208   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:09.322380   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:03:09.322769   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:03:09.322790   83344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-593099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-593099/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-593099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:03:09.453677   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
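The hostname provisioning above is just a couple of shell commands executed over SSH: set the hostname, write /etc/hostname, and patch /etc/hosts. A condensed sketch with golang.org/x/crypto/ssh follows; the /etc/hosts fix-up is omitted, and the insecure host-key callback mirrors the StrictHostKeyChecking=no behaviour in the log rather than being a recommendation:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func setHostname(addr, keyFile, name string) error {
		key, err := os.ReadFile(keyFile)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", addr+":22", cfg)
		if err != nil {
			return err
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()

		cmd := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
		if out, err := sess.CombinedOutput(cmd); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}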
	I1206 19:03:09.453711   83344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:03:09.453743   83344 buildroot.go:174] setting up certificates
	I1206 19:03:09.453755   83344 provision.go:83] configureAuth start
	I1206 19:03:09.453765   83344 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:03:09.454078   83344 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:03:09.456844   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.457181   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:09.457203   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.457379   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:09.459338   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.459728   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:09.459759   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.459891   83344 provision.go:138] copyHostCerts
	I1206 19:03:09.459950   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:03:09.459996   83344 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:03:09.460013   83344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:03:09.460074   83344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:03:09.460158   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:03:09.460175   83344 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:03:09.460181   83344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:03:09.460200   83344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:03:09.460264   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:03:09.460286   83344 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:03:09.460299   83344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:03:09.460332   83344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:03:09.460410   83344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.multinode-593099 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube multinode-593099]
	I1206 19:03:09.596561   83344 provision.go:172] copyRemoteCerts
	I1206 19:03:09.596624   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:03:09.596650   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:09.599148   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.599496   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:09.599529   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.599707   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:09.599905   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.600067   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:09.600165   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:03:09.690049   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 19:03:09.690126   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:03:09.713441   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 19:03:09.713554   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1206 19:03:09.736357   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 19:03:09.736456   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:03:09.759576   83344 provision.go:86] duration metric: configureAuth took 305.807105ms
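configureAuth above regenerates the client/server TLS material and pushes ca.pem, server.pem and server-key.pem to /etc/docker on the guest. One simple way to copy a small file over an already-established SSH connection is to pipe its contents into `sudo tee`; this is only a sketch of that idea, not minikube's actual scp implementation:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// copyFile streams a local file to a root-owned path on the guest.
	func copyFile(client *ssh.Client, localPath, remotePath string) error {
		data, err := os.ReadFile(localPath)
		if err != nil {
			return err
		}
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()

		stdin, err := sess.StdinPipe()
		if err != nil {
			return err
		}
		if err := sess.Start(fmt.Sprintf("sudo tee %s > /dev/null", remotePath)); err != nil {
			return err
		}
		if _, err := stdin.Write(data); err != nil {
			return err
		}
		stdin.Close()
		return sess.Wait()
	}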
	I1206 19:03:09.759605   83344 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:03:09.759806   83344 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:03:09.759904   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:09.762775   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.763117   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:09.763147   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:09.763327   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:09.763501   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.763659   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:09.763806   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:09.763957   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:03:09.764339   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:03:09.764357   83344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:03:10.079679   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:03:10.079709   83344 main.go:141] libmachine: Checking connection to Docker...
	I1206 19:03:10.079736   83344 main.go:141] libmachine: (multinode-593099) Calling .GetURL
	I1206 19:03:10.081246   83344 main.go:141] libmachine: (multinode-593099) DBG | Using libvirt version 6000000
	I1206 19:03:10.083315   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.083649   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:10.083678   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.083812   83344 main.go:141] libmachine: Docker is up and running!
	I1206 19:03:10.083825   83344 main.go:141] libmachine: Reticulating splines...
	I1206 19:03:10.083833   83344 client.go:171] LocalClient.Create took 23.925973968s
	I1206 19:03:10.083862   83344 start.go:167] duration metric: libmachine.API.Create for "multinode-593099" took 23.926071495s
	I1206 19:03:10.083876   83344 start.go:300] post-start starting for "multinode-593099" (driver="kvm2")
	I1206 19:03:10.083892   83344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:03:10.083915   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:10.084162   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:03:10.084188   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:10.086424   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.086705   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:10.086734   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.086859   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:10.087041   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:10.087212   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:10.087390   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:03:10.179639   83344 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:03:10.184147   83344 command_runner.go:130] > NAME=Buildroot
	I1206 19:03:10.184176   83344 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1206 19:03:10.184180   83344 command_runner.go:130] > ID=buildroot
	I1206 19:03:10.184186   83344 command_runner.go:130] > VERSION_ID=2021.02.12
	I1206 19:03:10.184191   83344 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1206 19:03:10.184234   83344 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:03:10.184251   83344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:03:10.184327   83344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:03:10.184447   83344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:03:10.184461   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /etc/ssl/certs/708342.pem
	I1206 19:03:10.184574   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:03:10.193888   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:03:10.216291   83344 start.go:303] post-start completed in 132.389454ms
	I1206 19:03:10.216343   83344 main.go:141] libmachine: (multinode-593099) Calling .GetConfigRaw
	I1206 19:03:10.217001   83344 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:03:10.219580   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.219949   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:10.219990   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.220189   83344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:03:10.220382   83344 start.go:128] duration metric: createHost completed in 24.08083583s
	I1206 19:03:10.220408   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:10.222401   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.222695   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:10.222726   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.222836   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:10.223054   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:10.223218   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:10.223373   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:10.223535   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:03:10.223847   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:03:10.223858   83344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:03:10.345967   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701889390.327790162
	
	I1206 19:03:10.345993   83344 fix.go:206] guest clock: 1701889390.327790162
	I1206 19:03:10.346002   83344 fix.go:219] Guest: 2023-12-06 19:03:10.327790162 +0000 UTC Remote: 2023-12-06 19:03:10.22039408 +0000 UTC m=+24.210512414 (delta=107.396082ms)
	I1206 19:03:10.346027   83344 fix.go:190] guest clock delta is within tolerance: 107.396082ms
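The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it with the host clock; the machine is accepted because the ~107ms delta is inside tolerance. A small worked version of that comparison (the nine-digit nanosecond fraction from `date +%N` and the one-second tolerance are assumptions for illustration):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses "seconds.nanoseconds" output and returns guest-host skew.
	func guestClockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Assumes a full nine-digit fraction, as produced by `date +%N`.
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(hostNow), nil
	}

	func main() {
		d, err := guestClockDelta("1701889390.327790162", time.Now())
		if err != nil {
			panic(err)
		}
		if d < 0 {
			d = -d
		}
		fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", d, d < time.Second)
	}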
	I1206 19:03:10.346034   83344 start.go:83] releasing machines lock for "multinode-593099", held for 24.206680718s
	I1206 19:03:10.346055   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:10.346380   83344 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:03:10.348965   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.349318   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:10.349350   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.349523   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:10.349984   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:10.350157   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:10.350264   83344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:03:10.350304   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:10.350425   83344 ssh_runner.go:195] Run: cat /version.json
	I1206 19:03:10.350458   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:10.352907   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.353023   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.353256   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:10.353285   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.353375   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:10.353438   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:10.353484   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:10.353602   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:10.353688   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:10.353776   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:10.353840   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:10.353897   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:10.353957   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:03:10.354007   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:03:10.438561   83344 command_runner.go:130] > {"iso_version": "v1.32.1-1701387192-17703", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "196015715c4eb12e436d5bb69e555ba604cda88e"}
	I1206 19:03:10.438694   83344 ssh_runner.go:195] Run: systemctl --version
	I1206 19:03:10.469543   83344 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 19:03:10.469621   83344 command_runner.go:130] > systemd 247 (247)
	I1206 19:03:10.469651   83344 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1206 19:03:10.469740   83344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:03:10.630932   83344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 19:03:10.636716   83344 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 19:03:10.636756   83344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:03:10.636813   83344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:03:10.650864   83344 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1206 19:03:10.650900   83344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:03:10.650910   83344 start.go:475] detecting cgroup driver to use...
	I1206 19:03:10.650966   83344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:03:10.664152   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:03:10.675833   83344 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:03:10.675887   83344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:03:10.687872   83344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:03:10.701151   83344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:03:10.714380   83344 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1206 19:03:10.801869   83344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:03:10.918430   83344 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1206 19:03:10.918472   83344 docker.go:219] disabling docker service ...
	I1206 19:03:10.918523   83344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:03:10.931045   83344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:03:10.942168   83344 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1206 19:03:10.942350   83344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:03:11.051890   83344 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1206 19:03:11.051979   83344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:03:11.152151   83344 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1206 19:03:11.152184   83344 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1206 19:03:11.152260   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:03:11.164506   83344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:03:11.181205   83344 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 19:03:11.181588   83344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:03:11.181656   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:03:11.190855   83344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:03:11.190920   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:03:11.200131   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:03:11.209224   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:03:11.218324   83344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:03:11.228013   83344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:03:11.236161   83344 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:03:11.236211   83344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:03:11.236262   83344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:03:11.248520   83344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:03:11.258332   83344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:03:11.357789   83344 ssh_runner.go:195] Run: sudo systemctl restart crio
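The block above points crictl at the CRI-O socket, then rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup), enables IP forwarding, and restarts CRI-O. The same sequence can be read as an ordered command list; in this sketch the commands run through a local shell purely for illustration, whereas the log executes them on the guest over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func configureCRIO() error {
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, c := range cmds {
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				return fmt.Errorf("%q failed: %v: %s", c, err, out)
			}
		}
		return nil
	}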
	I1206 19:03:11.523553   83344 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:03:11.523645   83344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:03:11.532214   83344 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 19:03:11.532245   83344 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 19:03:11.532254   83344 command_runner.go:130] > Device: 16h/22d	Inode: 758         Links: 1
	I1206 19:03:11.532266   83344 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:03:11.532272   83344 command_runner.go:130] > Access: 2023-12-06 19:03:11.492244744 +0000
	I1206 19:03:11.532282   83344 command_runner.go:130] > Modify: 2023-12-06 19:03:11.492244744 +0000
	I1206 19:03:11.532289   83344 command_runner.go:130] > Change: 2023-12-06 19:03:11.492244744 +0000
	I1206 19:03:11.532293   83344 command_runner.go:130] >  Birth: -
	I1206 19:03:11.532319   83344 start.go:543] Will wait 60s for crictl version
	I1206 19:03:11.532362   83344 ssh_runner.go:195] Run: which crictl
	I1206 19:03:11.536089   83344 command_runner.go:130] > /usr/bin/crictl
	I1206 19:03:11.536161   83344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:03:11.570226   83344 command_runner.go:130] > Version:  0.1.0
	I1206 19:03:11.570258   83344 command_runner.go:130] > RuntimeName:  cri-o
	I1206 19:03:11.570289   83344 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1206 19:03:11.570295   83344 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 19:03:11.572007   83344 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:03:11.572138   83344 ssh_runner.go:195] Run: crio --version
	I1206 19:03:11.620791   83344 command_runner.go:130] > crio version 1.24.1
	I1206 19:03:11.620823   83344 command_runner.go:130] > Version:          1.24.1
	I1206 19:03:11.620830   83344 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:03:11.620834   83344 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:03:11.620840   83344 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:03:11.620845   83344 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:03:11.620849   83344 command_runner.go:130] > Compiler:         gc
	I1206 19:03:11.620854   83344 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:03:11.620862   83344 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:03:11.620873   83344 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:03:11.620897   83344 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:03:11.620904   83344 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:03:11.620989   83344 ssh_runner.go:195] Run: crio --version
	I1206 19:03:11.664107   83344 command_runner.go:130] > crio version 1.24.1
	I1206 19:03:11.664137   83344 command_runner.go:130] > Version:          1.24.1
	I1206 19:03:11.664147   83344 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:03:11.664152   83344 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:03:11.664158   83344 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:03:11.664163   83344 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:03:11.664167   83344 command_runner.go:130] > Compiler:         gc
	I1206 19:03:11.664171   83344 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:03:11.664177   83344 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:03:11.664183   83344 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:03:11.664187   83344 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:03:11.664195   83344 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:03:11.666391   83344 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:03:11.667997   83344 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:03:11.670846   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:11.671206   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:11.671235   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:11.671466   83344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:03:11.675749   83344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:03:11.689201   83344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:03:11.689295   83344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:03:11.722428   83344 command_runner.go:130] > {
	I1206 19:03:11.722460   83344 command_runner.go:130] >   "images": [
	I1206 19:03:11.722465   83344 command_runner.go:130] >   ]
	I1206 19:03:11.722470   83344 command_runner.go:130] > }
	I1206 19:03:11.723878   83344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:03:11.723958   83344 ssh_runner.go:195] Run: which lz4
	I1206 19:03:11.727985   83344 command_runner.go:130] > /usr/bin/lz4
	I1206 19:03:11.728016   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1206 19:03:11.728111   83344 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 19:03:11.732295   83344 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:03:11.732441   83344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:03:11.732473   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:03:13.612086   83344 crio.go:444] Took 1.884019 seconds to copy over tarball
	I1206 19:03:13.612184   83344 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:03:16.321350   83344 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.709137326s)
	I1206 19:03:16.321393   83344 crio.go:451] Took 2.709271 seconds to extract the tarball
	I1206 19:03:16.321403   83344 ssh_runner.go:146] rm: /preloaded.tar.lz4
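The sequence above checks for the preload tarball on the guest, copies the cached archive over when it is missing, extracts it with lz4 into /var, and then removes the tarball. A minimal local sketch of that flow, using os/exec instead of minikube's ssh_runner (the helper name ensurePreload and local execution are illustrative assumptions, not minikube's code):

// Sketch of the preload flow shown in the log: stat the target, copy the
// cached tarball if absent, extract with lz4, then clean up. Illustrative only.
package main

import (
	"log"
	"os"
	"os/exec"
)

func ensurePreload(cached, target string) error {
	// Equivalent of the failed `stat` above: only copy when the file is absent.
	if _, err := os.Stat(target); os.IsNotExist(err) {
		if err := exec.Command("sudo", "cp", cached, target).Run(); err != nil {
			return err
		}
	}
	// Same extraction command the log shows.
	if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", target).Run(); err != nil {
		return err
	}
	// Mirrors the final rm of /preloaded.tar.lz4.
	return os.Remove(target)
}

func main() {
	cached := "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
	if err := ensurePreload(cached, "/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}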
	I1206 19:03:16.362189   83344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:03:16.441520   83344 command_runner.go:130] > {
	I1206 19:03:16.441550   83344 command_runner.go:130] >   "images": [
	I1206 19:03:16.441564   83344 command_runner.go:130] >     {
	I1206 19:03:16.441581   83344 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1206 19:03:16.441589   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.441600   83344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1206 19:03:16.441606   83344 command_runner.go:130] >       ],
	I1206 19:03:16.441614   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.441627   83344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1206 19:03:16.441644   83344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1206 19:03:16.441653   83344 command_runner.go:130] >       ],
	I1206 19:03:16.441665   83344 command_runner.go:130] >       "size": "65258016",
	I1206 19:03:16.441672   83344 command_runner.go:130] >       "uid": null,
	I1206 19:03:16.441679   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.441688   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.441698   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.441705   83344 command_runner.go:130] >     },
	I1206 19:03:16.441712   83344 command_runner.go:130] >     {
	I1206 19:03:16.441725   83344 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1206 19:03:16.441736   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.441751   83344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 19:03:16.441760   83344 command_runner.go:130] >       ],
	I1206 19:03:16.441770   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.441787   83344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1206 19:03:16.441804   83344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1206 19:03:16.441814   83344 command_runner.go:130] >       ],
	I1206 19:03:16.441826   83344 command_runner.go:130] >       "size": "31470524",
	I1206 19:03:16.441835   83344 command_runner.go:130] >       "uid": null,
	I1206 19:03:16.441845   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.441854   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.441863   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.441871   83344 command_runner.go:130] >     },
	I1206 19:03:16.441880   83344 command_runner.go:130] >     {
	I1206 19:03:16.441892   83344 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1206 19:03:16.441901   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.441912   83344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1206 19:03:16.441920   83344 command_runner.go:130] >       ],
	I1206 19:03:16.441926   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.441943   83344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1206 19:03:16.441957   83344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1206 19:03:16.441966   83344 command_runner.go:130] >       ],
	I1206 19:03:16.441975   83344 command_runner.go:130] >       "size": "53621675",
	I1206 19:03:16.441985   83344 command_runner.go:130] >       "uid": null,
	I1206 19:03:16.441995   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.442003   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.442012   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.442020   83344 command_runner.go:130] >     },
	I1206 19:03:16.442029   83344 command_runner.go:130] >     {
	I1206 19:03:16.442038   83344 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1206 19:03:16.442048   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.442059   83344 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1206 19:03:16.442068   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442077   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.442090   83344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1206 19:03:16.442104   83344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1206 19:03:16.442122   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442134   83344 command_runner.go:130] >       "size": "295456551",
	I1206 19:03:16.442143   83344 command_runner.go:130] >       "uid": {
	I1206 19:03:16.442153   83344 command_runner.go:130] >         "value": "0"
	I1206 19:03:16.442163   83344 command_runner.go:130] >       },
	I1206 19:03:16.442174   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.442184   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.442194   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.442200   83344 command_runner.go:130] >     },
	I1206 19:03:16.442208   83344 command_runner.go:130] >     {
	I1206 19:03:16.442221   83344 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1206 19:03:16.442232   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.442243   83344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1206 19:03:16.442253   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442263   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.442278   83344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1206 19:03:16.442293   83344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1206 19:03:16.442302   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442313   83344 command_runner.go:130] >       "size": "127226832",
	I1206 19:03:16.442327   83344 command_runner.go:130] >       "uid": {
	I1206 19:03:16.442337   83344 command_runner.go:130] >         "value": "0"
	I1206 19:03:16.442348   83344 command_runner.go:130] >       },
	I1206 19:03:16.442357   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.442373   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.442384   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.442390   83344 command_runner.go:130] >     },
	I1206 19:03:16.442399   83344 command_runner.go:130] >     {
	I1206 19:03:16.442413   83344 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1206 19:03:16.442422   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.442434   83344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1206 19:03:16.442442   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442449   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.442465   83344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1206 19:03:16.442480   83344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1206 19:03:16.442489   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442498   83344 command_runner.go:130] >       "size": "123261750",
	I1206 19:03:16.442507   83344 command_runner.go:130] >       "uid": {
	I1206 19:03:16.442521   83344 command_runner.go:130] >         "value": "0"
	I1206 19:03:16.442531   83344 command_runner.go:130] >       },
	I1206 19:03:16.442541   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.442551   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.442561   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.442569   83344 command_runner.go:130] >     },
	I1206 19:03:16.442577   83344 command_runner.go:130] >     {
	I1206 19:03:16.442586   83344 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1206 19:03:16.442595   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.442605   83344 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1206 19:03:16.442613   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442623   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.442636   83344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1206 19:03:16.442649   83344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1206 19:03:16.442658   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442667   83344 command_runner.go:130] >       "size": "74749335",
	I1206 19:03:16.442682   83344 command_runner.go:130] >       "uid": null,
	I1206 19:03:16.442692   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.442710   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.442720   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.442730   83344 command_runner.go:130] >     },
	I1206 19:03:16.442738   83344 command_runner.go:130] >     {
	I1206 19:03:16.442753   83344 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1206 19:03:16.442763   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.442774   83344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1206 19:03:16.442783   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442790   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.442891   83344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1206 19:03:16.442909   83344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1206 19:03:16.442915   83344 command_runner.go:130] >       ],
	I1206 19:03:16.442922   83344 command_runner.go:130] >       "size": "61551410",
	I1206 19:03:16.442932   83344 command_runner.go:130] >       "uid": {
	I1206 19:03:16.442941   83344 command_runner.go:130] >         "value": "0"
	I1206 19:03:16.442949   83344 command_runner.go:130] >       },
	I1206 19:03:16.442957   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.442966   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.442977   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.442985   83344 command_runner.go:130] >     },
	I1206 19:03:16.442991   83344 command_runner.go:130] >     {
	I1206 19:03:16.443004   83344 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1206 19:03:16.443015   83344 command_runner.go:130] >       "repoTags": [
	I1206 19:03:16.443022   83344 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1206 19:03:16.443031   83344 command_runner.go:130] >       ],
	I1206 19:03:16.443038   83344 command_runner.go:130] >       "repoDigests": [
	I1206 19:03:16.443052   83344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1206 19:03:16.443066   83344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1206 19:03:16.443075   83344 command_runner.go:130] >       ],
	I1206 19:03:16.443085   83344 command_runner.go:130] >       "size": "750414",
	I1206 19:03:16.443093   83344 command_runner.go:130] >       "uid": {
	I1206 19:03:16.443103   83344 command_runner.go:130] >         "value": "65535"
	I1206 19:03:16.443112   83344 command_runner.go:130] >       },
	I1206 19:03:16.443122   83344 command_runner.go:130] >       "username": "",
	I1206 19:03:16.443127   83344 command_runner.go:130] >       "spec": null,
	I1206 19:03:16.443132   83344 command_runner.go:130] >       "pinned": false
	I1206 19:03:16.443144   83344 command_runner.go:130] >     }
	I1206 19:03:16.443153   83344 command_runner.go:130] >   ]
	I1206 19:03:16.443158   83344 command_runner.go:130] > }
	I1206 19:03:16.443356   83344 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:03:16.443379   83344 cache_images.go:84] Images are preloaded, skipping loading
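The preload verification above parses the output of `crictl images --output json` and looks for the expected control-plane image tag before deciding whether loading can be skipped. A minimal sketch of that check, assuming the JSON shape shown in the log (the struct and hasImage helper are illustrative, not minikube's own types):

// Parse `crictl images --output json` and report whether a given tag is present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}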
	I1206 19:03:16.443449   83344 ssh_runner.go:195] Run: crio config
	I1206 19:03:16.500881   83344 command_runner.go:130] ! time="2023-12-06 19:03:16.491631899Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1206 19:03:16.500934   83344 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 19:03:16.507756   83344 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 19:03:16.507781   83344 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 19:03:16.507800   83344 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 19:03:16.507806   83344 command_runner.go:130] > #
	I1206 19:03:16.507816   83344 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 19:03:16.507829   83344 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 19:03:16.507842   83344 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 19:03:16.507859   83344 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 19:03:16.507866   83344 command_runner.go:130] > # reload'.
	I1206 19:03:16.507872   83344 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 19:03:16.507880   83344 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 19:03:16.507889   83344 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 19:03:16.507897   83344 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 19:03:16.507903   83344 command_runner.go:130] > [crio]
	I1206 19:03:16.507909   83344 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 19:03:16.507914   83344 command_runner.go:130] > # containers images, in this directory.
	I1206 19:03:16.507921   83344 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1206 19:03:16.507932   83344 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 19:03:16.507939   83344 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1206 19:03:16.507945   83344 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 19:03:16.507953   83344 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 19:03:16.507958   83344 command_runner.go:130] > storage_driver = "overlay"
	I1206 19:03:16.507964   83344 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 19:03:16.507970   83344 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 19:03:16.507976   83344 command_runner.go:130] > storage_option = [
	I1206 19:03:16.507981   83344 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1206 19:03:16.507986   83344 command_runner.go:130] > ]
	I1206 19:03:16.507992   83344 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 19:03:16.508000   83344 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 19:03:16.508012   83344 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 19:03:16.508020   83344 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 19:03:16.508028   83344 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 19:03:16.508035   83344 command_runner.go:130] > # always happen on a node reboot
	I1206 19:03:16.508040   83344 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 19:03:16.508047   83344 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 19:03:16.508056   83344 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 19:03:16.508066   83344 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 19:03:16.508074   83344 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1206 19:03:16.508081   83344 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 19:03:16.508091   83344 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 19:03:16.508097   83344 command_runner.go:130] > # internal_wipe = true
	I1206 19:03:16.508103   83344 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 19:03:16.508111   83344 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 19:03:16.508119   83344 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 19:03:16.508125   83344 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 19:03:16.508133   83344 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 19:03:16.508138   83344 command_runner.go:130] > [crio.api]
	I1206 19:03:16.508148   83344 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 19:03:16.508155   83344 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 19:03:16.508160   83344 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 19:03:16.508167   83344 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 19:03:16.508173   83344 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 19:03:16.508181   83344 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 19:03:16.508185   83344 command_runner.go:130] > # stream_port = "0"
	I1206 19:03:16.508193   83344 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 19:03:16.508197   83344 command_runner.go:130] > # stream_enable_tls = false
	I1206 19:03:16.508205   83344 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 19:03:16.508210   83344 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 19:03:16.508216   83344 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 19:03:16.508224   83344 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1206 19:03:16.508230   83344 command_runner.go:130] > # minutes.
	I1206 19:03:16.508235   83344 command_runner.go:130] > # stream_tls_cert = ""
	I1206 19:03:16.508242   83344 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 19:03:16.508249   83344 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1206 19:03:16.508256   83344 command_runner.go:130] > # stream_tls_key = ""
	I1206 19:03:16.508264   83344 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 19:03:16.508273   83344 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 19:03:16.508280   83344 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1206 19:03:16.508284   83344 command_runner.go:130] > # stream_tls_ca = ""
	I1206 19:03:16.508291   83344 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:03:16.508297   83344 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1206 19:03:16.508304   83344 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:03:16.508311   83344 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1206 19:03:16.508330   83344 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 19:03:16.508339   83344 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 19:03:16.508343   83344 command_runner.go:130] > [crio.runtime]
	I1206 19:03:16.508348   83344 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 19:03:16.508354   83344 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 19:03:16.508360   83344 command_runner.go:130] > # "nofile=1024:2048"
	I1206 19:03:16.508367   83344 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 19:03:16.508373   83344 command_runner.go:130] > # default_ulimits = [
	I1206 19:03:16.508376   83344 command_runner.go:130] > # ]
	I1206 19:03:16.508382   83344 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 19:03:16.508390   83344 command_runner.go:130] > # no_pivot = false
	I1206 19:03:16.508397   83344 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 19:03:16.508405   83344 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 19:03:16.508412   83344 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 19:03:16.508418   83344 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 19:03:16.508426   83344 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 19:03:16.508433   83344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:03:16.508439   83344 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1206 19:03:16.508444   83344 command_runner.go:130] > # Cgroup setting for conmon
	I1206 19:03:16.508452   83344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 19:03:16.508458   83344 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 19:03:16.508465   83344 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 19:03:16.508472   83344 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 19:03:16.508478   83344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:03:16.508484   83344 command_runner.go:130] > conmon_env = [
	I1206 19:03:16.508490   83344 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1206 19:03:16.508496   83344 command_runner.go:130] > ]
	I1206 19:03:16.508501   83344 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 19:03:16.508511   83344 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 19:03:16.508518   83344 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 19:03:16.508523   83344 command_runner.go:130] > # default_env = [
	I1206 19:03:16.508527   83344 command_runner.go:130] > # ]
	I1206 19:03:16.508535   83344 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 19:03:16.508539   83344 command_runner.go:130] > # selinux = false
	I1206 19:03:16.508545   83344 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 19:03:16.508554   83344 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1206 19:03:16.508561   83344 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1206 19:03:16.508565   83344 command_runner.go:130] > # seccomp_profile = ""
	I1206 19:03:16.508573   83344 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1206 19:03:16.508581   83344 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1206 19:03:16.508587   83344 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1206 19:03:16.508594   83344 command_runner.go:130] > # which might increase security.
	I1206 19:03:16.508598   83344 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1206 19:03:16.508610   83344 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 19:03:16.508619   83344 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 19:03:16.508627   83344 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 19:03:16.508638   83344 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 19:03:16.508645   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:03:16.508650   83344 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 19:03:16.508656   83344 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 19:03:16.508663   83344 command_runner.go:130] > # the cgroup blockio controller.
	I1206 19:03:16.508667   83344 command_runner.go:130] > # blockio_config_file = ""
	I1206 19:03:16.508675   83344 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 19:03:16.508681   83344 command_runner.go:130] > # irqbalance daemon.
	I1206 19:03:16.508687   83344 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 19:03:16.508698   83344 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 19:03:16.508706   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:03:16.508710   83344 command_runner.go:130] > # rdt_config_file = ""
	I1206 19:03:16.508718   83344 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 19:03:16.508724   83344 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 19:03:16.508730   83344 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 19:03:16.508737   83344 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 19:03:16.508746   83344 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 19:03:16.508755   83344 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 19:03:16.508763   83344 command_runner.go:130] > # will be added.
	I1206 19:03:16.508769   83344 command_runner.go:130] > # default_capabilities = [
	I1206 19:03:16.508773   83344 command_runner.go:130] > # 	"CHOWN",
	I1206 19:03:16.508780   83344 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 19:03:16.508783   83344 command_runner.go:130] > # 	"FSETID",
	I1206 19:03:16.508793   83344 command_runner.go:130] > # 	"FOWNER",
	I1206 19:03:16.508799   83344 command_runner.go:130] > # 	"SETGID",
	I1206 19:03:16.508803   83344 command_runner.go:130] > # 	"SETUID",
	I1206 19:03:16.508809   83344 command_runner.go:130] > # 	"SETPCAP",
	I1206 19:03:16.508813   83344 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 19:03:16.508819   83344 command_runner.go:130] > # 	"KILL",
	I1206 19:03:16.508823   83344 command_runner.go:130] > # ]
	I1206 19:03:16.508831   83344 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 19:03:16.508838   83344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:03:16.508845   83344 command_runner.go:130] > # default_sysctls = [
	I1206 19:03:16.508848   83344 command_runner.go:130] > # ]
	I1206 19:03:16.508855   83344 command_runner.go:130] > # List of devices on the host that a
	I1206 19:03:16.508861   83344 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 19:03:16.508869   83344 command_runner.go:130] > # allowed_devices = [
	I1206 19:03:16.508876   83344 command_runner.go:130] > # 	"/dev/fuse",
	I1206 19:03:16.508879   83344 command_runner.go:130] > # ]
	I1206 19:03:16.508886   83344 command_runner.go:130] > # List of additional devices. specified as
	I1206 19:03:16.508893   83344 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 19:03:16.508901   83344 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 19:03:16.508930   83344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:03:16.508938   83344 command_runner.go:130] > # additional_devices = [
	I1206 19:03:16.508941   83344 command_runner.go:130] > # ]
	I1206 19:03:16.508946   83344 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 19:03:16.508950   83344 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 19:03:16.508954   83344 command_runner.go:130] > # 	"/etc/cdi",
	I1206 19:03:16.508959   83344 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 19:03:16.508964   83344 command_runner.go:130] > # ]
	I1206 19:03:16.508970   83344 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 19:03:16.508978   83344 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 19:03:16.508985   83344 command_runner.go:130] > # Defaults to false.
	I1206 19:03:16.508990   83344 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 19:03:16.509006   83344 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 19:03:16.509014   83344 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 19:03:16.509020   83344 command_runner.go:130] > # hooks_dir = [
	I1206 19:03:16.509025   83344 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 19:03:16.509031   83344 command_runner.go:130] > # ]
	I1206 19:03:16.509039   83344 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 19:03:16.509048   83344 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 19:03:16.509055   83344 command_runner.go:130] > # its default mounts from the following two files:
	I1206 19:03:16.509058   83344 command_runner.go:130] > #
	I1206 19:03:16.509067   83344 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 19:03:16.509076   83344 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 19:03:16.509082   83344 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 19:03:16.509087   83344 command_runner.go:130] > #
	I1206 19:03:16.509093   83344 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 19:03:16.509101   83344 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 19:03:16.509110   83344 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 19:03:16.509117   83344 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 19:03:16.509120   83344 command_runner.go:130] > #
	I1206 19:03:16.509127   83344 command_runner.go:130] > # default_mounts_file = ""
	I1206 19:03:16.509134   83344 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 19:03:16.509141   83344 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 19:03:16.509147   83344 command_runner.go:130] > pids_limit = 1024
	I1206 19:03:16.509153   83344 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1206 19:03:16.509161   83344 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 19:03:16.509170   83344 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 19:03:16.509180   83344 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 19:03:16.509186   83344 command_runner.go:130] > # log_size_max = -1
	I1206 19:03:16.509193   83344 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1206 19:03:16.509199   83344 command_runner.go:130] > # log_to_journald = false
	I1206 19:03:16.509205   83344 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 19:03:16.509212   83344 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 19:03:16.509217   83344 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 19:03:16.509224   83344 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 19:03:16.509240   83344 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 19:03:16.509247   83344 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 19:03:16.509253   83344 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 19:03:16.509262   83344 command_runner.go:130] > # read_only = false
	I1206 19:03:16.509268   83344 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 19:03:16.509277   83344 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 19:03:16.509283   83344 command_runner.go:130] > # live configuration reload.
	I1206 19:03:16.509287   83344 command_runner.go:130] > # log_level = "info"
	I1206 19:03:16.509292   83344 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 19:03:16.509300   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:03:16.509304   83344 command_runner.go:130] > # log_filter = ""
	I1206 19:03:16.509309   83344 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 19:03:16.509317   83344 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 19:03:16.509324   83344 command_runner.go:130] > # separated by comma.
	I1206 19:03:16.509328   83344 command_runner.go:130] > # uid_mappings = ""
	I1206 19:03:16.509336   83344 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 19:03:16.509343   83344 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 19:03:16.509350   83344 command_runner.go:130] > # separated by comma.
	I1206 19:03:16.509354   83344 command_runner.go:130] > # gid_mappings = ""
	I1206 19:03:16.509362   83344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 19:03:16.509368   83344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:03:16.509378   83344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:03:16.509384   83344 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 19:03:16.509390   83344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 19:03:16.509398   83344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:03:16.509404   83344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:03:16.509411   83344 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 19:03:16.509417   83344 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 19:03:16.509424   83344 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 19:03:16.509431   83344 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 19:03:16.509437   83344 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 19:03:16.509443   83344 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 19:03:16.509451   83344 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 19:03:16.509456   83344 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 19:03:16.509463   83344 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 19:03:16.509469   83344 command_runner.go:130] > drop_infra_ctr = false
	I1206 19:03:16.509477   83344 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 19:03:16.509485   83344 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 19:03:16.509494   83344 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 19:03:16.509501   83344 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 19:03:16.509510   83344 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 19:03:16.509515   83344 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 19:03:16.509521   83344 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 19:03:16.509528   83344 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 19:03:16.509535   83344 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1206 19:03:16.509541   83344 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 19:03:16.509550   83344 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1206 19:03:16.509558   83344 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1206 19:03:16.509564   83344 command_runner.go:130] > # default_runtime = "runc"
	I1206 19:03:16.509569   83344 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 19:03:16.509580   83344 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1206 19:03:16.509596   83344 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1206 19:03:16.509603   83344 command_runner.go:130] > # creation as a file is not desired either.
	I1206 19:03:16.509611   83344 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 19:03:16.509618   83344 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 19:03:16.509632   83344 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 19:03:16.509641   83344 command_runner.go:130] > # ]
	I1206 19:03:16.509652   83344 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 19:03:16.509660   83344 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 19:03:16.509666   83344 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1206 19:03:16.509674   83344 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1206 19:03:16.509678   83344 command_runner.go:130] > #
	I1206 19:03:16.509683   83344 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1206 19:03:16.509690   83344 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1206 19:03:16.509695   83344 command_runner.go:130] > #  runtime_type = "oci"
	I1206 19:03:16.509701   83344 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1206 19:03:16.509709   83344 command_runner.go:130] > #  privileged_without_host_devices = false
	I1206 19:03:16.509713   83344 command_runner.go:130] > #  allowed_annotations = []
	I1206 19:03:16.509719   83344 command_runner.go:130] > # Where:
	I1206 19:03:16.509725   83344 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1206 19:03:16.509736   83344 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1206 19:03:16.509744   83344 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 19:03:16.509751   83344 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 19:03:16.509757   83344 command_runner.go:130] > #   in $PATH.
	I1206 19:03:16.509763   83344 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1206 19:03:16.509775   83344 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 19:03:16.509783   83344 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1206 19:03:16.509793   83344 command_runner.go:130] > #   state.
	I1206 19:03:16.509801   83344 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 19:03:16.509809   83344 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1206 19:03:16.509815   83344 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 19:03:16.509823   83344 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 19:03:16.509830   83344 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 19:03:16.509837   83344 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 19:03:16.509844   83344 command_runner.go:130] > #   The currently recognized values are:
	I1206 19:03:16.509850   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 19:03:16.509859   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 19:03:16.509865   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 19:03:16.509871   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 19:03:16.509879   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 19:03:16.509888   83344 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 19:03:16.509894   83344 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 19:03:16.509902   83344 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1206 19:03:16.509910   83344 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 19:03:16.509916   83344 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 19:03:16.509921   83344 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1206 19:03:16.509927   83344 command_runner.go:130] > runtime_type = "oci"
	I1206 19:03:16.509931   83344 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 19:03:16.509939   83344 command_runner.go:130] > runtime_config_path = ""
	I1206 19:03:16.509943   83344 command_runner.go:130] > monitor_path = ""
	I1206 19:03:16.509949   83344 command_runner.go:130] > monitor_cgroup = ""
	I1206 19:03:16.509954   83344 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 19:03:16.509962   83344 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1206 19:03:16.509966   83344 command_runner.go:130] > # running containers
	I1206 19:03:16.509973   83344 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1206 19:03:16.509979   83344 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1206 19:03:16.510050   83344 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1206 19:03:16.510064   83344 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1206 19:03:16.510069   83344 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1206 19:03:16.510074   83344 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1206 19:03:16.510078   83344 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1206 19:03:16.510086   83344 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1206 19:03:16.510093   83344 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1206 19:03:16.510098   83344 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1206 19:03:16.510106   83344 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 19:03:16.510112   83344 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 19:03:16.510120   83344 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 19:03:16.510127   83344 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 19:03:16.510137   83344 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1206 19:03:16.510145   83344 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 19:03:16.510156   83344 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 19:03:16.510166   83344 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 19:03:16.510172   83344 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 19:03:16.510180   83344 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 19:03:16.510186   83344 command_runner.go:130] > # Example:
	I1206 19:03:16.510191   83344 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 19:03:16.510196   83344 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 19:03:16.510203   83344 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 19:03:16.510208   83344 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 19:03:16.510216   83344 command_runner.go:130] > # cpuset = 0
	I1206 19:03:16.510220   83344 command_runner.go:130] > # cpushares = "0-1"
	I1206 19:03:16.510225   83344 command_runner.go:130] > # Where:
	I1206 19:03:16.510230   83344 command_runner.go:130] > # The workload name is workload-type.
	I1206 19:03:16.510238   83344 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 19:03:16.510245   83344 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 19:03:16.510250   83344 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 19:03:16.510260   83344 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 19:03:16.510266   83344 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 19:03:16.510272   83344 command_runner.go:130] > # 
	I1206 19:03:16.510278   83344 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 19:03:16.510282   83344 command_runner.go:130] > #
	I1206 19:03:16.510287   83344 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 19:03:16.510293   83344 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1206 19:03:16.510301   83344 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1206 19:03:16.510307   83344 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1206 19:03:16.510315   83344 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1206 19:03:16.510319   83344 command_runner.go:130] > [crio.image]
	I1206 19:03:16.510329   83344 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 19:03:16.510336   83344 command_runner.go:130] > # default_transport = "docker://"
	I1206 19:03:16.510343   83344 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 19:03:16.510351   83344 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:03:16.510356   83344 command_runner.go:130] > # global_auth_file = ""
	I1206 19:03:16.510361   83344 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 19:03:16.510367   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:03:16.510371   83344 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1206 19:03:16.510378   83344 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 19:03:16.510383   83344 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:03:16.510388   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:03:16.510392   83344 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 19:03:16.510401   83344 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 19:03:16.510407   83344 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 19:03:16.510412   83344 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 19:03:16.510418   83344 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 19:03:16.510422   83344 command_runner.go:130] > # pause_command = "/pause"
	I1206 19:03:16.510427   83344 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 19:03:16.510435   83344 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 19:03:16.510441   83344 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 19:03:16.510447   83344 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 19:03:16.510452   83344 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 19:03:16.510455   83344 command_runner.go:130] > # signature_policy = ""
	I1206 19:03:16.510461   83344 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 19:03:16.510467   83344 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 19:03:16.510470   83344 command_runner.go:130] > # changing them here.
	I1206 19:03:16.510474   83344 command_runner.go:130] > # insecure_registries = [
	I1206 19:03:16.510478   83344 command_runner.go:130] > # ]
	I1206 19:03:16.510486   83344 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 19:03:16.510491   83344 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 19:03:16.510496   83344 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 19:03:16.510506   83344 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 19:03:16.510513   83344 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 19:03:16.510519   83344 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 19:03:16.510525   83344 command_runner.go:130] > # CNI plugins.
	I1206 19:03:16.510529   83344 command_runner.go:130] > [crio.network]
	I1206 19:03:16.510542   83344 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 19:03:16.510550   83344 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1206 19:03:16.510555   83344 command_runner.go:130] > # cni_default_network = ""
	I1206 19:03:16.510563   83344 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 19:03:16.510567   83344 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 19:03:16.510575   83344 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 19:03:16.510579   83344 command_runner.go:130] > # plugin_dirs = [
	I1206 19:03:16.510586   83344 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 19:03:16.510589   83344 command_runner.go:130] > # ]
	I1206 19:03:16.510594   83344 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 19:03:16.510600   83344 command_runner.go:130] > [crio.metrics]
	I1206 19:03:16.510605   83344 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 19:03:16.510612   83344 command_runner.go:130] > enable_metrics = true
	I1206 19:03:16.510616   83344 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 19:03:16.510624   83344 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 19:03:16.510630   83344 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1206 19:03:16.510636   83344 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 19:03:16.510641   83344 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 19:03:16.510650   83344 command_runner.go:130] > # metrics_collectors = [
	I1206 19:03:16.510653   83344 command_runner.go:130] > # 	"operations",
	I1206 19:03:16.510658   83344 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1206 19:03:16.510668   83344 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1206 19:03:16.510672   83344 command_runner.go:130] > # 	"operations_errors",
	I1206 19:03:16.510677   83344 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1206 19:03:16.510683   83344 command_runner.go:130] > # 	"image_pulls_by_name",
	I1206 19:03:16.510687   83344 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1206 19:03:16.510694   83344 command_runner.go:130] > # 	"image_pulls_failures",
	I1206 19:03:16.510698   83344 command_runner.go:130] > # 	"image_pulls_successes",
	I1206 19:03:16.510704   83344 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 19:03:16.510708   83344 command_runner.go:130] > # 	"image_layer_reuse",
	I1206 19:03:16.510715   83344 command_runner.go:130] > # 	"containers_oom_total",
	I1206 19:03:16.510718   83344 command_runner.go:130] > # 	"containers_oom",
	I1206 19:03:16.510725   83344 command_runner.go:130] > # 	"processes_defunct",
	I1206 19:03:16.510728   83344 command_runner.go:130] > # 	"operations_total",
	I1206 19:03:16.510733   83344 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 19:03:16.510738   83344 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 19:03:16.510744   83344 command_runner.go:130] > # 	"operations_errors_total",
	I1206 19:03:16.510749   83344 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 19:03:16.510753   83344 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 19:03:16.510757   83344 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 19:03:16.510762   83344 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 19:03:16.510767   83344 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 19:03:16.510773   83344 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 19:03:16.510777   83344 command_runner.go:130] > # ]
	I1206 19:03:16.510784   83344 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 19:03:16.510788   83344 command_runner.go:130] > # metrics_port = 9090
	I1206 19:03:16.510799   83344 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 19:03:16.510803   83344 command_runner.go:130] > # metrics_socket = ""
	I1206 19:03:16.510811   83344 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 19:03:16.510817   83344 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 19:03:16.510825   83344 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 19:03:16.510830   83344 command_runner.go:130] > # certificate on any modification event.
	I1206 19:03:16.510836   83344 command_runner.go:130] > # metrics_cert = ""
	I1206 19:03:16.510841   83344 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 19:03:16.510848   83344 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 19:03:16.510854   83344 command_runner.go:130] > # metrics_key = ""
	I1206 19:03:16.510860   83344 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 19:03:16.510866   83344 command_runner.go:130] > [crio.tracing]
	I1206 19:03:16.510871   83344 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 19:03:16.510878   83344 command_runner.go:130] > # enable_tracing = false
	I1206 19:03:16.510889   83344 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1206 19:03:16.510897   83344 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1206 19:03:16.510902   83344 command_runner.go:130] > # Number of samples to collect per million spans.
	I1206 19:03:16.510909   83344 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 19:03:16.510914   83344 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 19:03:16.510920   83344 command_runner.go:130] > [crio.stats]
	I1206 19:03:16.510926   83344 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 19:03:16.510931   83344 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 19:03:16.510937   83344 command_runner.go:130] > # stats_collection_period = 0
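	The CRI-O settings echoed above come from the default /etc/crio/crio.conf on the VM; in the portion shown here only pause_image and enable_metrics are set explicitly, while the rest is left at its commented-out default. A minimal sketch of how one of these defaults could be overridden without editing the main file, assuming CRI-O's crio.conf.d drop-in directory is in use (the file name and the registry below are placeholders, not taken from this run):
	# Sketch only: override image-related defaults via a drop-in, then restart CRI-O.
	sudo tee /etc/crio/crio.conf.d/99-overrides.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	insecure_registries = ["registry.local:5000"]   # placeholder registry, not from the log
	EOF
	sudo systemctl restart crio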
	I1206 19:03:16.511016   83344 cni.go:84] Creating CNI manager for ""
	I1206 19:03:16.511028   83344 cni.go:136] 1 nodes found, recommending kindnet
	I1206 19:03:16.511046   83344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:03:16.511077   83344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-593099 NodeName:multinode-593099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:03:16.511244   83344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-593099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:03:16.511349   83344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-593099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
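	The drop-in above first clears ExecStart= and then sets the minikube-specific kubelet command line; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. If the kubelet does not come up with the expected flags, the merged unit can be checked directly; a sketch, reusing the profile name from this run:
	# Sketch: show the kubelet unit together with all drop-ins, then its current status.
	minikube -p multinode-593099 ssh "sudo systemctl cat kubelet"
	minikube -p multinode-593099 ssh "sudo systemctl status kubelet --no-pager"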
	I1206 19:03:16.511414   83344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:03:16.520418   83344 command_runner.go:130] > kubeadm
	I1206 19:03:16.520439   83344 command_runner.go:130] > kubectl
	I1206 19:03:16.520445   83344 command_runner.go:130] > kubelet
	I1206 19:03:16.520469   83344 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:03:16.520538   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:03:16.529041   83344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1206 19:03:16.545182   83344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:03:16.562776   83344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
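	The scp above copies the rendered kubeadm configuration (the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration dump a few lines up) onto the node as /var/tmp/minikube/kubeadm.yaml.new. A quick way to inspect what actually landed there, as a sketch using the profile name and paths from this run:
	# Sketch: read the rendered kubeadm config straight off the node.
	minikube -p multinode-593099 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	# For comparison, kubeadm's built-in defaults for the same config API versions.
	minikube -p multinode-593099 ssh "sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config print init-defaults"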
	I1206 19:03:16.579001   83344 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I1206 19:03:16.582907   83344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:03:16.595887   83344 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099 for IP: 192.168.39.125
	I1206 19:03:16.595929   83344 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:16.596096   83344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:03:16.596159   83344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:03:16.596220   83344 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key
	I1206 19:03:16.596237   83344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt with IP's: []
	I1206 19:03:16.695609   83344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt ...
	I1206 19:03:16.695643   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt: {Name:mka0ae75344cadeb442f0920f9d873f17411cb26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:16.695854   83344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key ...
	I1206 19:03:16.695869   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key: {Name:mkf7f338f47fbae8e2f05d9859234dc3c8349719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:16.695977   83344 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key.657bd91f
	I1206 19:03:16.695996   83344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt.657bd91f with IP's: [192.168.39.125 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 19:03:16.915817   83344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt.657bd91f ...
	I1206 19:03:16.915850   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt.657bd91f: {Name:mk3da96915c3ea0316662e59b5c3b619457be3d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:16.916038   83344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key.657bd91f ...
	I1206 19:03:16.916056   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key.657bd91f: {Name:mk34fb43ec3dc7aad1b442c7ebe32b4f2f42c77f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:16.916144   83344 certs.go:337] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt.657bd91f -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt
	I1206 19:03:16.916245   83344 certs.go:341] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key.657bd91f -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key
	I1206 19:03:16.916339   83344 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key
	I1206 19:03:16.916360   83344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.crt with IP's: []
	I1206 19:03:16.970223   83344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.crt ...
	I1206 19:03:16.970256   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.crt: {Name:mkf5cbc3d01a4c83c5a78bd033c6d49114a8e1dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:16.970433   83344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key ...
	I1206 19:03:16.970450   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key: {Name:mk2a66f18e7153fc060b2d6108ede1c34f91b0ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:16.970552   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 19:03:16.970578   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 19:03:16.970608   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 19:03:16.970640   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 19:03:16.970662   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 19:03:16.970688   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 19:03:16.970707   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 19:03:16.970728   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 19:03:16.970789   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:03:16.970841   83344 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:03:16.970861   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:03:16.970908   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:03:16.970943   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:03:16.970986   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:03:16.971041   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:03:16.971081   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:03:16.971100   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem -> /usr/share/ca-certificates/70834.pem
	I1206 19:03:16.971119   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /usr/share/ca-certificates/708342.pem
	I1206 19:03:16.971730   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:03:16.997426   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:03:17.020905   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:03:17.044017   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:03:17.066821   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:03:17.089754   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:03:17.114485   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:03:17.137803   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:03:17.160946   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:03:17.184116   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:03:17.206599   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:03:17.233349   83344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:03:17.250318   83344 ssh_runner.go:195] Run: openssl version
	I1206 19:03:17.255576   83344 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1206 19:03:17.255728   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:03:17.265631   83344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:03:17.270180   83344 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:03:17.270226   83344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:03:17.270283   83344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:03:17.275566   83344 command_runner.go:130] > 51391683
	I1206 19:03:17.275978   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:03:17.285567   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:03:17.295176   83344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:03:17.299565   83344 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:03:17.299670   83344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:03:17.299724   83344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:03:17.305031   83344 command_runner.go:130] > 3ec20f2e
	I1206 19:03:17.305222   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:03:17.314647   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:03:17.324231   83344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:03:17.328494   83344 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:03:17.328599   83344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:03:17.328646   83344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:03:17.334037   83344 command_runner.go:130] > b5213941
	I1206 19:03:17.334273   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
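	The three blocks above all follow the same pattern: copy the CA certificate into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so TLS clients that scan that directory can find it. The pattern in isolation, as a sketch using the minikubeCA file from this run:
	# Sketch of the hash-and-link step performed for each CA above.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints b5213941 for this CA
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"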
	I1206 19:03:17.343501   83344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:03:17.347408   83344 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:03:17.347440   83344 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:03:17.347482   83344 kubeadm.go:404] StartCluster: {Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:03:17.347558   83344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:03:17.347590   83344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:03:17.387638   83344 cri.go:89] found id: ""
	I1206 19:03:17.387703   83344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:03:17.396439   83344 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1206 19:03:17.396463   83344 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1206 19:03:17.396469   83344 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1206 19:03:17.396547   83344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:03:17.405095   83344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:03:17.413323   83344 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1206 19:03:17.413349   83344 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1206 19:03:17.413356   83344 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1206 19:03:17.413371   83344 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:03:17.413398   83344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:03:17.413427   83344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 19:03:17.789402   83344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 19:03:17.789432   83344 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 19:03:30.673965   83344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 19:03:30.673997   83344 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1206 19:03:30.674056   83344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 19:03:30.674093   83344 command_runner.go:130] > [preflight] Running pre-flight checks
	I1206 19:03:30.674176   83344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 19:03:30.674188   83344 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 19:03:30.674288   83344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 19:03:30.674300   83344 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 19:03:30.674414   83344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 19:03:30.674426   83344 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 19:03:30.674562   83344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 19:03:30.674607   83344 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 19:03:30.676452   83344 out.go:204]   - Generating certificates and keys ...
	I1206 19:03:30.676546   83344 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1206 19:03:30.676555   83344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 19:03:30.676624   83344 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1206 19:03:30.676634   83344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 19:03:30.676716   83344 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 19:03:30.676737   83344 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 19:03:30.676831   83344 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1206 19:03:30.676840   83344 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 19:03:30.676923   83344 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1206 19:03:30.676939   83344 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 19:03:30.677003   83344 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1206 19:03:30.677011   83344 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 19:03:30.677078   83344 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1206 19:03:30.677089   83344 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 19:03:30.677272   83344 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-593099] and IPs [192.168.39.125 127.0.0.1 ::1]
	I1206 19:03:30.677283   83344 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-593099] and IPs [192.168.39.125 127.0.0.1 ::1]
	I1206 19:03:30.677361   83344 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1206 19:03:30.677371   83344 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 19:03:30.677537   83344 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-593099] and IPs [192.168.39.125 127.0.0.1 ::1]
	I1206 19:03:30.677547   83344 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-593099] and IPs [192.168.39.125 127.0.0.1 ::1]
	I1206 19:03:30.677641   83344 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 19:03:30.677652   83344 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 19:03:30.677722   83344 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 19:03:30.677730   83344 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 19:03:30.677794   83344 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1206 19:03:30.677804   83344 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 19:03:30.677882   83344 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 19:03:30.677896   83344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 19:03:30.677971   83344 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 19:03:30.677981   83344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 19:03:30.678060   83344 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 19:03:30.678066   83344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 19:03:30.678173   83344 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 19:03:30.678185   83344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 19:03:30.678267   83344 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 19:03:30.678277   83344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 19:03:30.678401   83344 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 19:03:30.678408   83344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 19:03:30.678461   83344 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 19:03:30.678470   83344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 19:03:30.680132   83344 out.go:204]   - Booting up control plane ...
	I1206 19:03:30.680226   83344 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 19:03:30.680235   83344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 19:03:30.680351   83344 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 19:03:30.680372   83344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 19:03:30.680454   83344 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 19:03:30.680463   83344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 19:03:30.680612   83344 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 19:03:30.680623   83344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 19:03:30.680698   83344 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 19:03:30.680704   83344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 19:03:30.680759   83344 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1206 19:03:30.680768   83344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 19:03:30.680960   83344 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 19:03:30.680972   83344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 19:03:30.681054   83344 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003373 seconds
	I1206 19:03:30.681067   83344 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003373 seconds
	I1206 19:03:30.681181   83344 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 19:03:30.681188   83344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 19:03:30.681358   83344 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 19:03:30.681378   83344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 19:03:30.681448   83344 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1206 19:03:30.681460   83344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 19:03:30.681674   83344 command_runner.go:130] > [mark-control-plane] Marking the node multinode-593099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 19:03:30.681685   83344 kubeadm.go:322] [mark-control-plane] Marking the node multinode-593099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 19:03:30.681757   83344 command_runner.go:130] > [bootstrap-token] Using token: x1kqnt.0o9r7s84rky3r894
	I1206 19:03:30.681765   83344 kubeadm.go:322] [bootstrap-token] Using token: x1kqnt.0o9r7s84rky3r894
	I1206 19:03:30.683331   83344 out.go:204]   - Configuring RBAC rules ...
	I1206 19:03:30.683429   83344 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 19:03:30.683442   83344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 19:03:30.683528   83344 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 19:03:30.683538   83344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 19:03:30.683698   83344 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 19:03:30.683713   83344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 19:03:30.683847   83344 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 19:03:30.683855   83344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 19:03:30.683978   83344 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 19:03:30.683988   83344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 19:03:30.684058   83344 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 19:03:30.684064   83344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 19:03:30.684157   83344 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 19:03:30.684163   83344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 19:03:30.684197   83344 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1206 19:03:30.684202   83344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 19:03:30.684245   83344 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1206 19:03:30.684251   83344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 19:03:30.684254   83344 kubeadm.go:322] 
	I1206 19:03:30.684303   83344 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1206 19:03:30.684322   83344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 19:03:30.684345   83344 kubeadm.go:322] 
	I1206 19:03:30.684420   83344 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1206 19:03:30.684428   83344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 19:03:30.684431   83344 kubeadm.go:322] 
	I1206 19:03:30.684478   83344 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1206 19:03:30.684488   83344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 19:03:30.684568   83344 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 19:03:30.684577   83344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 19:03:30.684647   83344 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 19:03:30.684657   83344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 19:03:30.684663   83344 kubeadm.go:322] 
	I1206 19:03:30.684736   83344 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1206 19:03:30.684744   83344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 19:03:30.684757   83344 kubeadm.go:322] 
	I1206 19:03:30.684848   83344 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 19:03:30.684852   83344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 19:03:30.684858   83344 kubeadm.go:322] 
	I1206 19:03:30.684902   83344 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1206 19:03:30.684907   83344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 19:03:30.684969   83344 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 19:03:30.684974   83344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 19:03:30.685034   83344 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 19:03:30.685040   83344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 19:03:30.685043   83344 kubeadm.go:322] 
	I1206 19:03:30.685108   83344 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1206 19:03:30.685117   83344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 19:03:30.685180   83344 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1206 19:03:30.685186   83344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 19:03:30.685189   83344 kubeadm.go:322] 
	I1206 19:03:30.685276   83344 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x1kqnt.0o9r7s84rky3r894 \
	I1206 19:03:30.685283   83344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x1kqnt.0o9r7s84rky3r894 \
	I1206 19:03:30.685372   83344 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 19:03:30.685377   83344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 19:03:30.685393   83344 command_runner.go:130] > 	--control-plane 
	I1206 19:03:30.685399   83344 kubeadm.go:322] 	--control-plane 
	I1206 19:03:30.685409   83344 kubeadm.go:322] 
	I1206 19:03:30.685508   83344 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1206 19:03:30.685520   83344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 19:03:30.685526   83344 kubeadm.go:322] 
	I1206 19:03:30.685617   83344 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x1kqnt.0o9r7s84rky3r894 \
	I1206 19:03:30.685636   83344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x1kqnt.0o9r7s84rky3r894 \
	I1206 19:03:30.685775   83344 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 19:03:30.685803   83344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
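	The join commands printed above embed the bootstrap token declared in the InitConfiguration earlier in this log, which expires after 24h (ttl: 24h0m0s). If a worker needs to be joined after that window, a fresh command can be generated on the control plane; a sketch using the kubeadm binary path from this run:
	# Sketch: print a new worker join command once the original bootstrap token has expired.
	minikube -p multinode-593099 ssh "sudo /var/lib/minikube/binaries/v1.28.4/kubeadm token create --print-join-command"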
	I1206 19:03:30.685816   83344 cni.go:84] Creating CNI manager for ""
	I1206 19:03:30.685821   83344 cni.go:136] 1 nodes found, recommending kindnet
	I1206 19:03:30.687575   83344 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1206 19:03:30.689032   83344 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 19:03:30.710380   83344 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1206 19:03:30.710411   83344 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1206 19:03:30.710421   83344 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1206 19:03:30.710430   83344 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:03:30.710440   83344 command_runner.go:130] > Access: 2023-12-06 19:02:59.161576619 +0000
	I1206 19:03:30.710453   83344 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1206 19:03:30.710461   83344 command_runner.go:130] > Change: 2023-12-06 19:02:57.329576619 +0000
	I1206 19:03:30.710467   83344 command_runner.go:130] >  Birth: -
	I1206 19:03:30.710536   83344 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 19:03:30.710550   83344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 19:03:30.770994   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 19:03:31.870683   83344 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1206 19:03:31.870709   83344 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1206 19:03:31.870721   83344 command_runner.go:130] > serviceaccount/kindnet created
	I1206 19:03:31.870726   83344 command_runner.go:130] > daemonset.apps/kindnet created
	I1206 19:03:31.870747   83344 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.099722258s)
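	With a single node found, minikube selects kindnet as the CNI, and the apply above creates its ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet. A quick way to confirm the DaemonSet actually schedules pods, as a sketch (the kube-system namespace and the app=kindnet label are assumptions, not taken from this log):
	# Sketch: wait for the kindnet DaemonSet and list its pods (namespace/label assumed).
	kubectl --context multinode-593099 -n kube-system rollout status daemonset/kindnet --timeout=90s
	kubectl --context multinode-593099 -n kube-system get pods -l app=kindnet -o wide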
	I1206 19:03:31.870775   83344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:03:31.870897   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:31.870907   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=multinode-593099 minikube.k8s.io/updated_at=2023_12_06T19_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:31.938862   83344 command_runner.go:130] > -16
	I1206 19:03:31.938943   83344 ops.go:34] apiserver oom_adj: -16
	I1206 19:03:32.062239   83344 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1206 19:03:32.062376   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:32.064252   83344 command_runner.go:130] > node/multinode-593099 labeled
	I1206 19:03:32.166083   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:32.166290   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:32.253888   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:32.756397   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:32.845263   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:33.255819   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:33.339376   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:33.755926   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:33.851984   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:34.256610   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:34.339901   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:34.755839   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:34.842327   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:35.256713   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:35.344034   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:35.756650   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:35.852326   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:36.255768   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:36.343768   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:36.756348   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:36.851038   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:37.256606   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:37.343256   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:37.756459   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:37.852499   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:38.256699   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:38.341885   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:38.756787   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:38.848700   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:39.256409   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:39.346209   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:39.755745   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:39.852769   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:40.256732   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:40.335150   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:40.756796   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:40.849341   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:41.256544   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:41.342335   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:41.755852   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:41.842980   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:42.256682   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:42.394784   83344 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 19:03:42.756425   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:03:42.853650   83344 command_runner.go:130] > NAME      SECRETS   AGE
	I1206 19:03:42.855033   83344 command_runner.go:130] > default   0         0s
	I1206 19:03:42.856960   83344 kubeadm.go:1088] duration metric: took 10.986129593s to wait for elevateKubeSystemPrivileges.
	I1206 19:03:42.856991   83344 kubeadm.go:406] StartCluster complete in 25.509511457s
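	[editor's note] The long run of "serviceaccounts \"default\" not found" errors above is minikube's elevateKubeSystemPrivileges step waiting for kube-controller-manager to create the "default" ServiceAccount before it grants cluster-admin; it retries roughly every 500ms and succeeds after ~11s. Below is a minimal client-go sketch of that polling pattern, assuming the kubeconfig path from the log; it illustrates the wait loop, not minikube's implementation.

    // Sketch: poll until the "default" ServiceAccount exists, which is what the
    // repeated "get sa default" calls above are doing. Kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            if !errors.IsNotFound(err) {
                panic(err) // anything other than NotFound is unexpected here
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the log above
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }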
	I1206 19:03:42.857016   83344 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:42.857108   83344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:03:42.857915   83344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:03:42.858162   83344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:03:42.858249   83344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:03:42.858340   83344 addons.go:69] Setting storage-provisioner=true in profile "multinode-593099"
	I1206 19:03:42.858348   83344 addons.go:69] Setting default-storageclass=true in profile "multinode-593099"
	I1206 19:03:42.858364   83344 addons.go:231] Setting addon storage-provisioner=true in "multinode-593099"
	I1206 19:03:42.858370   83344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-593099"
	I1206 19:03:42.858402   83344 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:03:42.858427   83344 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:03:42.858446   83344 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:03:42.858914   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:03:42.858831   83344 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:03:42.858961   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:03:42.858920   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:03:42.859185   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:03:42.859676   83344 cert_rotation.go:137] Starting client certificate rotation controller
	I1206 19:03:42.860043   83344 round_trippers.go:463] GET https://192.168.39.125:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 19:03:42.860065   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:42.860076   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:42.860085   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:42.870654   83344 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1206 19:03:42.870678   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:42.870691   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:42.870700   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:42.870711   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:42.870720   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:42.870733   83344 round_trippers.go:580]     Content-Length: 291
	I1206 19:03:42.870745   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:42 GMT
	I1206 19:03:42.870757   83344 round_trippers.go:580]     Audit-Id: 1302a25a-a05e-4526-9bcc-4751361a2934
	I1206 19:03:42.871072   83344 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"235","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 19:03:42.871543   83344 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"235","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 19:03:42.871604   83344 round_trippers.go:463] PUT https://192.168.39.125:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 19:03:42.871617   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:42.871626   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:42.871633   83344 round_trippers.go:473]     Content-Type: application/json
	I1206 19:03:42.871639   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:42.879005   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I1206 19:03:42.879037   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
	I1206 19:03:42.879434   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:03:42.879518   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:03:42.880035   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:03:42.880055   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:03:42.880177   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:03:42.880210   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:03:42.880352   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:03:42.880550   83344 main.go:141] libmachine: (multinode-593099) Calling .GetState
	I1206 19:03:42.880585   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:03:42.881182   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:03:42.881247   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:03:42.883025   83344 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:03:42.883388   83344 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:03:42.883735   83344 addons.go:231] Setting addon default-storageclass=true in "multinode-593099"
	I1206 19:03:42.883777   83344 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:03:42.884216   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:03:42.884284   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:03:42.887059   83344 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1206 19:03:42.887081   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:42.887092   83344 round_trippers.go:580]     Audit-Id: d2a05252-cf08-491e-aa3f-410e8def315b
	I1206 19:03:42.887103   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:42.887111   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:42.887119   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:42.887128   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:42.887141   83344 round_trippers.go:580]     Content-Length: 291
	I1206 19:03:42.887151   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:42 GMT
	I1206 19:03:42.887450   83344 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"302","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 19:03:42.887674   83344 round_trippers.go:463] GET https://192.168.39.125:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 19:03:42.887688   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:42.887701   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:42.887715   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:42.890986   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:42.891007   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:42.891016   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:42.891024   83344 round_trippers.go:580]     Content-Length: 291
	I1206 19:03:42.891032   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:42 GMT
	I1206 19:03:42.891040   83344 round_trippers.go:580]     Audit-Id: 261cd863-2d1d-48db-a32f-66b32809846d
	I1206 19:03:42.891053   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:42.891064   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:42.891071   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:42.891097   83344 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"302","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 19:03:42.891193   83344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-593099" context rescaled to 1 replicas
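	[editor's note] The GET/PUT pair above reads the coredns Deployment's autoscaling/v1 Scale subresource (spec.replicas: 2) and writes it back with replicas: 1, which is where the "rescaled to 1 replicas" message comes from. A hedged client-go equivalent using the typed GetScale/UpdateScale helpers is sketched below; the kubeconfig path is an assumption.

    // Sketch: rescale kube-system/coredns to 1 replica through the Scale
    // subresource, mirroring the GET/PUT pair logged above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1 // was 2 in the response body above
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }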
	I1206 19:03:42.891233   83344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:03:42.894263   83344 out.go:177] * Verifying Kubernetes components...
	I1206 19:03:42.895896   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:03:42.897619   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41509
	I1206 19:03:42.898039   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:03:42.898568   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:03:42.898597   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:03:42.898970   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:03:42.899151   83344 main.go:141] libmachine: (multinode-593099) Calling .GetState
	I1206 19:03:42.899966   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I1206 19:03:42.900516   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:03:42.901070   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:03:42.901100   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:03:42.901118   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:42.903396   83344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:03:42.901479   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:03:42.904948   83344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:03:42.904972   83344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:03:42.904993   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:42.905355   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:03:42.905408   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:03:42.908692   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:42.909125   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:42.909162   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:42.909299   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:42.909532   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:42.909747   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:42.909933   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:03:42.920965   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46625
	I1206 19:03:42.921476   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:03:42.922023   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:03:42.922065   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:03:42.922376   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:03:42.922558   83344 main.go:141] libmachine: (multinode-593099) Calling .GetState
	I1206 19:03:42.924245   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:03:42.924528   83344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:03:42.924544   83344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:03:42.924562   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:03:42.927628   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:42.928078   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:03:42.928106   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:03:42.928311   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:03:42.928507   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:03:42.928685   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:03:42.928827   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:03:43.038636   83344 command_runner.go:130] > apiVersion: v1
	I1206 19:03:43.038661   83344 command_runner.go:130] > data:
	I1206 19:03:43.038665   83344 command_runner.go:130] >   Corefile: |
	I1206 19:03:43.038669   83344 command_runner.go:130] >     .:53 {
	I1206 19:03:43.038673   83344 command_runner.go:130] >         errors
	I1206 19:03:43.038678   83344 command_runner.go:130] >         health {
	I1206 19:03:43.038682   83344 command_runner.go:130] >            lameduck 5s
	I1206 19:03:43.038686   83344 command_runner.go:130] >         }
	I1206 19:03:43.038693   83344 command_runner.go:130] >         ready
	I1206 19:03:43.038699   83344 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1206 19:03:43.038703   83344 command_runner.go:130] >            pods insecure
	I1206 19:03:43.038708   83344 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1206 19:03:43.038713   83344 command_runner.go:130] >            ttl 30
	I1206 19:03:43.038716   83344 command_runner.go:130] >         }
	I1206 19:03:43.038720   83344 command_runner.go:130] >         prometheus :9153
	I1206 19:03:43.038725   83344 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1206 19:03:43.038730   83344 command_runner.go:130] >            max_concurrent 1000
	I1206 19:03:43.038740   83344 command_runner.go:130] >         }
	I1206 19:03:43.038746   83344 command_runner.go:130] >         cache 30
	I1206 19:03:43.038752   83344 command_runner.go:130] >         loop
	I1206 19:03:43.038759   83344 command_runner.go:130] >         reload
	I1206 19:03:43.038766   83344 command_runner.go:130] >         loadbalance
	I1206 19:03:43.038773   83344 command_runner.go:130] >     }
	I1206 19:03:43.038779   83344 command_runner.go:130] > kind: ConfigMap
	I1206 19:03:43.038787   83344 command_runner.go:130] > metadata:
	I1206 19:03:43.038799   83344 command_runner.go:130] >   creationTimestamp: "2023-12-06T19:03:30Z"
	I1206 19:03:43.038806   83344 command_runner.go:130] >   name: coredns
	I1206 19:03:43.038811   83344 command_runner.go:130] >   namespace: kube-system
	I1206 19:03:43.038818   83344 command_runner.go:130] >   resourceVersion: "231"
	I1206 19:03:43.038823   83344 command_runner.go:130] >   uid: b66768a8-338a-4581-9dee-65cb570c9e23
	I1206 19:03:43.040360   83344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 19:03:43.040562   83344 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:03:43.040978   83344 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:03:43.041375   83344 node_ready.go:35] waiting up to 6m0s for node "multinode-593099" to be "Ready" ...
	I1206 19:03:43.041470   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:43.041482   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:43.041492   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:43.041499   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:43.043362   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:03:43.043381   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:43.043391   83344 round_trippers.go:580]     Audit-Id: 2a19ffa5-6622-491e-a78a-58d9d0b87a4f
	I1206 19:03:43.043400   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:43.043413   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:43.043421   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:43.043426   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:43.043431   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:43 GMT
	I1206 19:03:43.043771   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"295","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:0
3:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5989 chars]
	I1206 19:03:43.044588   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:43.044606   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:43.044617   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:43.044626   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:43.050192   83344 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1206 19:03:43.050213   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:43.050220   83344 round_trippers.go:580]     Audit-Id: b32bbdd0-4437-4732-9669-736c9f13b682
	I1206 19:03:43.050225   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:43.050231   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:43.050236   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:43.050241   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:43.050250   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:43 GMT
	I1206 19:03:43.050370   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"295","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:0
3:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5989 chars]
	I1206 19:03:43.085517   83344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:03:43.140702   83344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:03:43.551197   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:43.551221   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:43.551229   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:43.551235   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:43.624678   83344 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I1206 19:03:43.624702   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:43.624709   83344 round_trippers.go:580]     Audit-Id: 60b7c227-431f-4233-81c6-1da90cf0a076
	I1206 19:03:43.624715   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:43.624720   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:43.624727   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:43.624735   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:43.624742   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:43 GMT
	I1206 19:03:43.640465   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:43.882998   83344 command_runner.go:130] > configmap/coredns replaced
	I1206 19:03:43.885672   83344 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
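	[editor's note] The configmap replace above splices a hosts block into the CoreDNS Corefile so host.minikube.internal resolves to the host-side gateway (192.168.39.1 here). minikube does this with a sed pipeline over SSH; the sketch below reaches the same end state by editing the ConfigMap through client-go, with the gateway IP and the string-splice anchor taken from the log as assumptions.

    // Sketch: inject a host.minikube.internal record into the CoreDNS Corefile by
    // editing the kube-system/coredns ConfigMap. Same end state as the sed pipeline
    // above; the splice point and IP are assumptions copied from the log.
    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
        // Insert the hosts block just before the "forward" plugin, as the sed script does.
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward . /etc/resolv.conf",
            hosts+"        forward . /etc/resolv.conf", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }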
	I1206 19:03:44.051842   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:44.051864   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:44.051873   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:44.051879   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:44.054848   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:44.054867   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:44.054874   83344 round_trippers.go:580]     Audit-Id: a5c9544c-154c-4658-8dc1-2cc2048f4756
	I1206 19:03:44.054880   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:44.054885   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:44.054890   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:44.054897   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:44.054904   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:44 GMT
	I1206 19:03:44.055191   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:44.165258   83344 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1206 19:03:44.171809   83344 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1206 19:03:44.184514   83344 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1206 19:03:44.201020   83344 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1206 19:03:44.211183   83344 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1206 19:03:44.236352   83344 command_runner.go:130] > pod/storage-provisioner created
	I1206 19:03:44.238824   83344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.153266701s)
	I1206 19:03:44.238878   83344 main.go:141] libmachine: Making call to close driver server
	I1206 19:03:44.238887   83344 main.go:141] libmachine: (multinode-593099) Calling .Close
	I1206 19:03:44.238893   83344 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1206 19:03:44.238936   83344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098206009s)
	I1206 19:03:44.238971   83344 main.go:141] libmachine: Making call to close driver server
	I1206 19:03:44.238986   83344 main.go:141] libmachine: (multinode-593099) Calling .Close
	I1206 19:03:44.239238   83344 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:03:44.239259   83344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:03:44.239269   83344 main.go:141] libmachine: Making call to close driver server
	I1206 19:03:44.239277   83344 main.go:141] libmachine: (multinode-593099) Calling .Close
	I1206 19:03:44.239297   83344 main.go:141] libmachine: (multinode-593099) DBG | Closing plugin on server side
	I1206 19:03:44.239305   83344 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:03:44.239318   83344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:03:44.239328   83344 main.go:141] libmachine: Making call to close driver server
	I1206 19:03:44.239337   83344 main.go:141] libmachine: (multinode-593099) Calling .Close
	I1206 19:03:44.239544   83344 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:03:44.239556   83344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:03:44.239659   83344 round_trippers.go:463] GET https://192.168.39.125:8443/apis/storage.k8s.io/v1/storageclasses
	I1206 19:03:44.239666   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:44.239680   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:44.239689   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:44.240934   83344 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:03:44.240946   83344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:03:44.252843   83344 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1206 19:03:44.252864   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:44.252871   83344 round_trippers.go:580]     Content-Length: 1273
	I1206 19:03:44.252877   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:44 GMT
	I1206 19:03:44.252882   83344 round_trippers.go:580]     Audit-Id: 627afe53-576d-4dd3-bc5b-e91c7115ad9d
	I1206 19:03:44.252887   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:44.252899   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:44.252904   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:44.252909   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:44.252958   83344 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"363"},"items":[{"metadata":{"name":"standard","uid":"9347938a-c72b-4c76-b239-bb70d5072600","resourceVersion":"350","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1206 19:03:44.253673   83344 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9347938a-c72b-4c76-b239-bb70d5072600","resourceVersion":"350","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1206 19:03:44.253761   83344 round_trippers.go:463] PUT https://192.168.39.125:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1206 19:03:44.253776   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:44.253796   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:44.253858   83344 round_trippers.go:473]     Content-Type: application/json
	I1206 19:03:44.253938   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:44.260542   83344 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1206 19:03:44.260574   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:44.260585   83344 round_trippers.go:580]     Audit-Id: e6c68b13-2346-4899-a30a-fa92c2d930c5
	I1206 19:03:44.260594   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:44.260603   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:44.260614   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:44.260621   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:44.260630   83344 round_trippers.go:580]     Content-Length: 1220
	I1206 19:03:44.260636   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:44 GMT
	I1206 19:03:44.260666   83344 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9347938a-c72b-4c76-b239-bb70d5072600","resourceVersion":"350","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1206 19:03:44.260799   83344 main.go:141] libmachine: Making call to close driver server
	I1206 19:03:44.260812   83344 main.go:141] libmachine: (multinode-593099) Calling .Close
	I1206 19:03:44.261101   83344 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:03:44.261120   83344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:03:44.263284   83344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1206 19:03:44.264687   83344 addons.go:502] enable addons completed in 1.406447676s: enabled=[storage-provisioner default-storageclass]
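	[editor's note] Just before the "Enabled addons" line, minikube GETs the StorageClass list and PUTs "standard" back, converging on the storageclass.kubernetes.io/is-default-class: "true" annotation that makes it the cluster default. A small client-go sketch of setting that annotation follows; it is illustrative only and skips the apply of storageclass.yaml itself.

    // Sketch: mark the "standard" StorageClass as the cluster default, the state
    // the PUT above converges on. Kubeconfig path is an assumption.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }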
	I1206 19:03:44.551861   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:44.551884   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:44.551892   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:44.551898   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:44.555838   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:44.555859   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:44.555866   83344 round_trippers.go:580]     Audit-Id: fc19c66c-89ba-4861-bb15-de931687c47e
	I1206 19:03:44.555872   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:44.555877   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:44.555886   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:44.555891   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:44.555896   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:44 GMT
	I1206 19:03:44.556047   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:45.051648   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:45.051675   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:45.051685   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:45.051691   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:45.054338   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:45.054360   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:45.054367   83344 round_trippers.go:580]     Audit-Id: dc2fd31a-b5d0-4748-8c1c-77b4e666300c
	I1206 19:03:45.054372   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:45.054378   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:45.054382   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:45.054389   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:45.054394   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:45 GMT
	I1206 19:03:45.054718   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:45.055028   83344 node_ready.go:58] node "multinode-593099" has status "Ready":"False"
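	[editor's note] From here the log is a readiness poll: node_ready waits up to 6m for the Node object's Ready condition, re-fetching it about twice a second and reporting "Ready":"False" until the kubelet posts a True condition. A minimal client-go sketch of that loop is below, with the node name and kubeconfig path copied from the log as assumptions.

    // Sketch: the node_ready wait above in client-go form - poll the Node object
    // until its NodeReady condition reports True. Names and paths are assumptions.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17740-63652/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(6 * time.Minute) // same budget as "waiting up to 6m0s" above
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-593099", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }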
	I1206 19:03:45.551425   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:45.551453   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:45.551462   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:45.551469   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:45.554411   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:45.554428   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:45.554442   83344 round_trippers.go:580]     Audit-Id: d715b5db-41ec-4999-819a-24726c6960d6
	I1206 19:03:45.554450   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:45.554460   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:45.554472   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:45.554479   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:45.554491   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:45 GMT
	I1206 19:03:45.554794   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:46.051746   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:46.051771   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:46.051783   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:46.051791   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:46.054753   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:46.054780   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:46.054789   83344 round_trippers.go:580]     Audit-Id: cc21c952-41ef-497a-aa33-c4dfb2e1d530
	I1206 19:03:46.054796   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:46.054803   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:46.054811   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:46.054819   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:46.054831   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:46 GMT
	I1206 19:03:46.055045   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:46.551588   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:46.551617   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:46.551625   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:46.551631   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:46.555211   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:46.555242   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:46.555258   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:46 GMT
	I1206 19:03:46.555267   83344 round_trippers.go:580]     Audit-Id: 5a1be257-2c47-404d-8369-014e292f8c85
	I1206 19:03:46.555275   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:46.555284   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:46.555291   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:46.555299   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:46.555584   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:47.051261   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:47.051287   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:47.051295   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:47.051301   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:47.054003   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:47.054031   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:47.054042   83344 round_trippers.go:580]     Audit-Id: 1fc23b40-64a7-45ca-8e89-3f3092b6b8fb
	I1206 19:03:47.054051   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:47.054059   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:47.054067   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:47.054075   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:47.054087   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:47 GMT
	I1206 19:03:47.055020   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:47.055458   83344 node_ready.go:58] node "multinode-593099" has status "Ready":"False"
	I1206 19:03:47.551710   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:47.551735   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:47.551746   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:47.551754   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:47.554589   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:47.554616   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:47.554625   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:47.554633   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:47 GMT
	I1206 19:03:47.554640   83344 round_trippers.go:580]     Audit-Id: 58e6c79d-d08e-49f6-bd58-7c856e33b856
	I1206 19:03:47.554651   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:47.554665   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:47.554673   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:47.554898   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:48.051216   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:48.051268   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:48.051280   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:48.051291   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:48.054928   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:48.054953   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:48.054963   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:48.054975   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:48.054987   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:48.054999   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:48 GMT
	I1206 19:03:48.055010   83344 round_trippers.go:580]     Audit-Id: e3ee5ec0-e52f-41f4-963e-8030cc05f81d
	I1206 19:03:48.055019   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:48.055152   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:48.551856   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:48.551887   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:48.551900   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:48.551911   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:48.554877   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:48.554897   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:48.554904   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:48 GMT
	I1206 19:03:48.554910   83344 round_trippers.go:580]     Audit-Id: 57e26b95-e935-483b-9aab-3fcb4c9b0aa8
	I1206 19:03:48.554915   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:48.554920   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:48.554925   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:48.554931   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:48.555067   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"323","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1206 19:03:49.051836   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:49.051864   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.051873   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.051879   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.055922   83344 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:03:49.055949   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.055958   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.055964   83344 round_trippers.go:580]     Audit-Id: 82b3d471-3b42-42a7-9fb3-b449ad6fa5eb
	I1206 19:03:49.055969   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.055974   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.055979   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.055984   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.056714   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:49.057021   83344 node_ready.go:49] node "multinode-593099" has status "Ready":"True"
	I1206 19:03:49.057037   83344 node_ready.go:38] duration metric: took 6.01563887s waiting for node "multinode-593099" to be "Ready" ...
	I1206 19:03:49.057047   83344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
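	The log above records two readiness phases: repeated GETs of /api/v1/nodes/multinode-593099 until the node's Ready condition is True, then per-pod GETs in kube-system (coredns, etcd, kube-apiserver, ...) until each pod reports Ready. As an illustration only (not minikube's actual node_ready.go/pod_ready.go code), a minimal client-go sketch of that kind of check, assuming a default kubeconfig, reusing the node and pod names from the log, and inferring the ~500 ms poll interval from the timestamps:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the Node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// podReady reports whether the Pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption for this sketch: credentials come from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()

		// Phase 1: poll the node object until its Ready condition is True.
		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-593099", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("node is Ready")

		// Phase 2: poll one of the system-critical pods (name taken from the log) until it is Ready.
		for {
			p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-h6rcq", metav1.GetOptions{})
			if err == nil && podReady(p) {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("coredns pod is Ready")
	}

	The remainder of the log is this same pattern applied in turn to the etcd, kube-apiserver, and kube-controller-manager pods.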
	I1206 19:03:49.057120   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:03:49.057128   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.057135   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.057141   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.060503   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:49.060522   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.060530   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.060538   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.060547   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.060554   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.060565   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.060573   83344 round_trippers.go:580]     Audit-Id: 839f282e-b3f4-4e4a-a660-b9e928a9e1b9
	I1206 19:03:49.061731   83344 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"384"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"382","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53918 chars]
	I1206 19:03:49.064724   83344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:49.064799   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:03:49.064809   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.064816   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.064822   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.066837   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:03:49.066855   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.066864   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.066871   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.066878   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.066887   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.066894   83344 round_trippers.go:580]     Audit-Id: 509f4c5c-6f78-42c8-b654-77be292b0de8
	I1206 19:03:49.066903   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.067131   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"382","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1206 19:03:49.067671   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:49.067689   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.067700   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.067709   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.069902   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:49.069915   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.069921   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.069926   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.069937   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.069942   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.069948   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.069953   83344 round_trippers.go:580]     Audit-Id: c9efccfe-421f-4aab-b115-094d52c2e719
	I1206 19:03:49.070185   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:49.070488   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:03:49.070499   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.070510   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.070515   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.072667   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:49.072686   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.072695   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.072704   83344 round_trippers.go:580]     Audit-Id: 255f6de1-a787-440a-a4c0-9fd370a59636
	I1206 19:03:49.072713   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.072723   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.072731   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.072741   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.072874   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"382","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1206 19:03:49.073247   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:49.073262   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.073272   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.073278   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.075332   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:49.075349   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.075359   83344 round_trippers.go:580]     Audit-Id: aa1a17bf-c061-4aff-9cdd-3b0bcdd65aea
	I1206 19:03:49.075367   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.075376   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.075385   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.075393   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.075400   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.075655   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:49.576530   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:03:49.576564   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.576576   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.576584   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.582259   83344 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1206 19:03:49.582287   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.582295   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.582301   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.582306   83344 round_trippers.go:580]     Audit-Id: aad0d52c-7822-4abd-8a74-ef3fc57e3dcb
	I1206 19:03:49.582311   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.582316   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.582321   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.583095   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"382","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1206 19:03:49.583555   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:49.583567   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:49.583574   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:49.583581   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:49.588368   83344 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:03:49.588392   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:49.588402   83344 round_trippers.go:580]     Audit-Id: f83ba37a-6e8a-4758-8012-74e0814555db
	I1206 19:03:49.588410   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:49.588419   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:49.588427   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:49.588482   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:49.588522   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:49 GMT
	I1206 19:03:49.588962   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:50.076719   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:03:50.076749   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:50.076762   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:50.076772   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:50.079777   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:50.079798   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:50.079804   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:50.079810   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:50 GMT
	I1206 19:03:50.079815   83344 round_trippers.go:580]     Audit-Id: 0a0ec99e-4dde-485a-b260-e5721a71669a
	I1206 19:03:50.079820   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:50.079825   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:50.079830   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:50.080273   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"396","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6494 chars]
	I1206 19:03:50.080748   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:50.080764   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:50.080771   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:50.080777   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:50.083248   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:50.083274   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:50.083281   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:50 GMT
	I1206 19:03:50.083288   83344 round_trippers.go:580]     Audit-Id: 5acaaf42-5000-4dc1-b614-9ca127c216e9
	I1206 19:03:50.083294   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:50.083304   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:50.083309   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:50.083317   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:50.083586   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:50.576320   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:03:50.576348   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:50.576356   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:50.576363   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:50.579227   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:50.579250   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:50.579260   83344 round_trippers.go:580]     Audit-Id: 15e65ab2-7591-48e2-b497-f0fbecfa62b2
	I1206 19:03:50.579268   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:50.579276   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:50.579283   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:50.579290   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:50.579299   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:50 GMT
	I1206 19:03:50.579860   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"396","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6494 chars]
	I1206 19:03:50.580333   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:50.580347   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:50.580355   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:50.580361   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:50.582646   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:50.582666   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:50.582673   83344 round_trippers.go:580]     Audit-Id: 6ba4a862-008a-4d07-806f-03480862824e
	I1206 19:03:50.582679   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:50.582686   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:50.582691   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:50.582696   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:50.582701   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:50 GMT
	I1206 19:03:50.582816   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:51.076639   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:03:51.076664   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.076684   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.076691   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.079902   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:51.079928   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.079937   83344 round_trippers.go:580]     Audit-Id: 25054331-4254-45dd-ab45-da93d7eef622
	I1206 19:03:51.079945   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.079952   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.079959   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.079967   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.079974   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.080463   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"399","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1206 19:03:51.081003   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.081017   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.081024   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.081030   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.083708   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:51.083728   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.083738   83344 round_trippers.go:580]     Audit-Id: f21bc0c7-84fc-4422-b86f-259c4be8c5d6
	I1206 19:03:51.083746   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.083756   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.083763   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.083772   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.083782   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.084635   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:51.084945   83344 pod_ready.go:92] pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace has status "Ready":"True"
	I1206 19:03:51.084960   83344 pod_ready.go:81] duration metric: took 2.020216048s waiting for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.084970   83344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.085019   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:03:51.085026   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.085033   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.085038   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.087418   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:51.087440   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.087450   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.087459   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.087474   83344 round_trippers.go:580]     Audit-Id: bae1a8bb-2704-4239-a554-1a353f9009a8
	I1206 19:03:51.087482   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.087491   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.087499   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.087635   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"275","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1206 19:03:51.088137   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.088158   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.088170   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.088178   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.091781   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:51.091794   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.091800   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.091806   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.091812   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.091818   83344 round_trippers.go:580]     Audit-Id: 1d5c4390-9a7c-41f3-8d91-156233878126
	I1206 19:03:51.091824   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.091829   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.092396   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:51.092659   83344 pod_ready.go:92] pod "etcd-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:03:51.092671   83344 pod_ready.go:81] duration metric: took 7.69661ms waiting for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.092682   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.092724   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-593099
	I1206 19:03:51.092731   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.092738   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.092743   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.097059   83344 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:03:51.097080   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.097089   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.097098   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.097107   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.097116   83344 round_trippers.go:580]     Audit-Id: 7fe8803c-7a5b-4c09-894c-da68495596cc
	I1206 19:03:51.097125   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.097138   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.098181   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-593099","namespace":"kube-system","uid":"c32eea84-5395-4ffd-9fe4-51ae29b0861c","resourceVersion":"277","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.125:8443","kubernetes.io/config.hash":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.mirror":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.seen":"2023-12-06T19:03:30.652197401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1206 19:03:51.098644   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.098660   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.098667   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.098673   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.100970   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:51.100984   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.100993   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.101001   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.101010   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.101019   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.101030   83344 round_trippers.go:580]     Audit-Id: eee8cc8d-1018-4b18-a9a5-2e202fb8a044
	I1206 19:03:51.101044   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.101219   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:51.101633   83344 pod_ready.go:92] pod "kube-apiserver-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:03:51.101655   83344 pod_ready.go:81] duration metric: took 8.966931ms waiting for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.101668   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.101799   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-593099
	I1206 19:03:51.101804   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.101810   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.101817   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.104971   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:51.104983   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.104989   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.105004   83344 round_trippers.go:580]     Audit-Id: 44483293-5657-416f-bd7d-067a4aa37ea1
	I1206 19:03:51.105012   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.105021   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.105030   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.105044   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.105357   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-593099","namespace":"kube-system","uid":"bd10545f-240d-418a-b4ca-a48c978a56c9","resourceVersion":"293","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.mirror":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.seen":"2023-12-06T19:03:30.652198715Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1206 19:03:51.105874   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.105896   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.105954   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.105990   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.108055   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:51.108074   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.108083   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.108092   83344 round_trippers.go:580]     Audit-Id: 98b4d567-4a2f-45fd-82cc-a95e48c17ab1
	I1206 19:03:51.108101   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.108111   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.108126   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.108135   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.108271   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:51.108643   83344 pod_ready.go:92] pod "kube-controller-manager-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:03:51.108662   83344 pod_ready.go:81] duration metric: took 6.925313ms waiting for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.108673   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.252180   83344 request.go:629] Waited for 143.445348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:03:51.252240   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:03:51.252245   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.252253   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.252260   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.255108   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:51.255135   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.255142   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.255147   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.255153   83344 round_trippers.go:580]     Audit-Id: 011e2b6a-faea-4c5f-8299-1c3c4bf0a5d3
	I1206 19:03:51.255158   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.255163   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.255168   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.255362   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-thqkt","generateName":"kube-proxy-","namespace":"kube-system","uid":"0012fda4-56e7-4054-ab90-1704569e66e8","resourceVersion":"368","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:03:51.452334   83344 request.go:629] Waited for 196.399538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.452411   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.452418   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.452430   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.452449   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.455566   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:03:51.455591   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.455601   83344 round_trippers.go:580]     Audit-Id: 6d3205c4-ed52-4914-a7d3-3a57589a55e4
	I1206 19:03:51.455609   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.455617   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.455626   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.455634   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.455641   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.455860   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:51.456270   83344 pod_ready.go:92] pod "kube-proxy-thqkt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:03:51.456297   83344 pod_ready.go:81] duration metric: took 347.610117ms waiting for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.456309   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.652865   83344 request.go:629] Waited for 196.455945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:03:51.652962   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:03:51.652980   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.652992   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.653003   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.655865   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:51.655889   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.655896   83344 round_trippers.go:580]     Audit-Id: 9af4f5d6-8b07-4199-b64e-7df4eafb7e16
	I1206 19:03:51.655901   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.655908   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.655916   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.655927   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.655939   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.656376   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-593099","namespace":"kube-system","uid":"7ae8a659-33ba-4e2b-9211-8d84efe7e5a4","resourceVersion":"281","creationTimestamp":"2023-12-06T19:03:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.mirror":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.seen":"2023-12-06T19:03:21.456083881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1206 19:03:51.852063   83344 request.go:629] Waited for 195.311171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.852140   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:03:51.852145   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.852152   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.852158   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.854873   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:51.854894   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.854901   83344 round_trippers.go:580]     Audit-Id: a03f174f-7b38-4df5-a287-2085fabbeb9f
	I1206 19:03:51.854906   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.854912   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.854919   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.854927   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.854938   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.855400   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:03:51.855695   83344 pod_ready.go:92] pod "kube-scheduler-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:03:51.855709   83344 pod_ready.go:81] duration metric: took 399.387059ms waiting for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:03:51.855719   83344 pod_ready.go:38] duration metric: took 2.798648435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
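
Note: the wait above repeatedly GETs each control-plane pod (and its node) until the PodReady condition reports True. A minimal client-go sketch of that kind of check, assuming an already-built *kubernetes.Clientset and caller-supplied namespace/pod names, not minikube's actual pod_ready.go code:

```go
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True, a rough
// stand-in for the wait loop logged above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}
```
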
	I1206 19:03:51.855740   83344 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:03:51.855783   83344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:03:51.868190   83344 command_runner.go:130] > 1068
	I1206 19:03:51.868287   83344 api_server.go:72] duration metric: took 8.977011252s to wait for apiserver process to appear ...
	I1206 19:03:51.868306   83344 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:03:51.868330   83344 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:03:51.873506   83344 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
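
Note: the healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A sketch of that check, assuming an *http.Client that already trusts the cluster CA (TLS setup omitted):

```go
package health

import (
	"fmt"
	"io"
	"net/http"
)

// apiserverHealthy GETs <base>/healthz and treats any non-200 response as
// unhealthy, mirroring the check logged above.
func apiserverHealthy(client *http.Client, base string) error {
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
```
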
	I1206 19:03:51.873564   83344 round_trippers.go:463] GET https://192.168.39.125:8443/version
	I1206 19:03:51.873570   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:51.873578   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:51.873591   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:51.874646   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:03:51.874667   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:51.874676   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:51.874683   83344 round_trippers.go:580]     Content-Length: 264
	I1206 19:03:51.874697   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:51 GMT
	I1206 19:03:51.874706   83344 round_trippers.go:580]     Audit-Id: bc5131df-e3f4-46f4-8d37-465f4b926e35
	I1206 19:03:51.874716   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:51.874725   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:51.874733   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:51.874771   83344 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1206 19:03:51.874929   83344 api_server.go:141] control plane version: v1.28.4
	I1206 19:03:51.874952   83344 api_server.go:131] duration metric: took 6.639864ms to wait for apiserver health ...
	I1206 19:03:51.874962   83344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:03:52.052380   83344 request.go:629] Waited for 177.328779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:03:52.052437   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:03:52.052443   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:52.052451   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:52.052457   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:52.060576   83344 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1206 19:03:52.060599   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:52.060606   83344 round_trippers.go:580]     Audit-Id: ee260dc0-72f3-43af-9f66-55ef83d0fdb8
	I1206 19:03:52.060612   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:52.060617   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:52.060623   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:52.060628   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:52.060645   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:52 GMT
	I1206 19:03:52.062285   83344 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"399","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1206 19:03:52.063959   83344 system_pods.go:59] 8 kube-system pods found
	I1206 19:03:52.063996   83344 system_pods.go:61] "coredns-5dd5756b68-h6rcq" [85247dde-4cee-482e-8f9b-a9e8f4e7172e] Running
	I1206 19:03:52.064005   83344 system_pods.go:61] "etcd-multinode-593099" [17573829-76f1-4718-80d6-248db178e8d0] Running
	I1206 19:03:52.064012   83344 system_pods.go:61] "kindnet-x2r64" [1dafec99-c18b-40ca-8b9d-b5d520390c8c] Running
	I1206 19:03:52.064018   83344 system_pods.go:61] "kube-apiserver-multinode-593099" [c32eea84-5395-4ffd-9fe4-51ae29b0861c] Running
	I1206 19:03:52.064030   83344 system_pods.go:61] "kube-controller-manager-multinode-593099" [bd10545f-240d-418a-b4ca-a48c978a56c9] Running
	I1206 19:03:52.064037   83344 system_pods.go:61] "kube-proxy-thqkt" [0012fda4-56e7-4054-ab90-1704569e66e8] Running
	I1206 19:03:52.064046   83344 system_pods.go:61] "kube-scheduler-multinode-593099" [7ae8a659-33ba-4e2b-9211-8d84efe7e5a4] Running
	I1206 19:03:52.064053   83344 system_pods.go:61] "storage-provisioner" [35974b37-5aff-4940-8e2d-5fec9d1e2166] Running
	I1206 19:03:52.064064   83344 system_pods.go:74] duration metric: took 189.093876ms to wait for pod list to return data ...
	I1206 19:03:52.064073   83344 default_sa.go:34] waiting for default service account to be created ...
	I1206 19:03:52.252587   83344 request.go:629] Waited for 188.406824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/default/serviceaccounts
	I1206 19:03:52.252658   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/default/serviceaccounts
	I1206 19:03:52.252664   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:52.252671   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:52.252678   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:52.257003   83344 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:03:52.257026   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:52.257033   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:52.257038   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:52.257043   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:52.257048   83344 round_trippers.go:580]     Content-Length: 261
	I1206 19:03:52.257053   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:52 GMT
	I1206 19:03:52.257061   83344 round_trippers.go:580]     Audit-Id: ae690e84-7103-41f5-8fc1-f4a786597c00
	I1206 19:03:52.257069   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:52.257105   83344 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"96af57ff-2c6a-48e3-9fcf-3f52ff53a1ea","resourceVersion":"298","creationTimestamp":"2023-12-06T19:03:42Z"}}]}
	I1206 19:03:52.257329   83344 default_sa.go:45] found service account: "default"
	I1206 19:03:52.257349   83344 default_sa.go:55] duration metric: took 193.269218ms for default service account to be created ...
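
Note: the repeated "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, which is governed by the QPS and Burst fields on rest.Config. A sketch showing where those knobs live; the values are illustrative, not what minikube configures:

```go
package clientcfg

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset whose client-side rate limiter is driven by
// QPS and Burst; bursts of requests above these limits queue up and emit the
// "Waited ... due to client-side throttling" messages seen above.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // sustained requests per second before throttling
	cfg.Burst = 10 // short bursts allowed above the sustained rate
	return kubernetes.NewForConfig(cfg)
}
```
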
	I1206 19:03:52.257364   83344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 19:03:52.452872   83344 request.go:629] Waited for 195.406043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:03:52.452952   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:03:52.452959   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:52.452967   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:52.452974   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:52.464689   83344 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1206 19:03:52.464722   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:52.464732   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:52.464741   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:52.464747   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:52.464752   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:52 GMT
	I1206 19:03:52.464757   83344 round_trippers.go:580]     Audit-Id: a3c2a5cd-3c82-432d-b1b1-370f72a8a966
	I1206 19:03:52.464762   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:52.470170   83344 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"399","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1206 19:03:52.471893   83344 system_pods.go:86] 8 kube-system pods found
	I1206 19:03:52.471915   83344 system_pods.go:89] "coredns-5dd5756b68-h6rcq" [85247dde-4cee-482e-8f9b-a9e8f4e7172e] Running
	I1206 19:03:52.471921   83344 system_pods.go:89] "etcd-multinode-593099" [17573829-76f1-4718-80d6-248db178e8d0] Running
	I1206 19:03:52.471925   83344 system_pods.go:89] "kindnet-x2r64" [1dafec99-c18b-40ca-8b9d-b5d520390c8c] Running
	I1206 19:03:52.471930   83344 system_pods.go:89] "kube-apiserver-multinode-593099" [c32eea84-5395-4ffd-9fe4-51ae29b0861c] Running
	I1206 19:03:52.471938   83344 system_pods.go:89] "kube-controller-manager-multinode-593099" [bd10545f-240d-418a-b4ca-a48c978a56c9] Running
	I1206 19:03:52.471945   83344 system_pods.go:89] "kube-proxy-thqkt" [0012fda4-56e7-4054-ab90-1704569e66e8] Running
	I1206 19:03:52.471950   83344 system_pods.go:89] "kube-scheduler-multinode-593099" [7ae8a659-33ba-4e2b-9211-8d84efe7e5a4] Running
	I1206 19:03:52.471956   83344 system_pods.go:89] "storage-provisioner" [35974b37-5aff-4940-8e2d-5fec9d1e2166] Running
	I1206 19:03:52.471966   83344 system_pods.go:126] duration metric: took 214.591692ms to wait for k8s-apps to be running ...
	I1206 19:03:52.471978   83344 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 19:03:52.472023   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:03:52.487506   83344 system_svc.go:56] duration metric: took 15.518202ms WaitForService to wait for kubelet.
	I1206 19:03:52.487536   83344 kubeadm.go:581] duration metric: took 9.596264279s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
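
Note: the kubelet check above runs "systemctl is-active" on the node via minikube's ssh_runner. A local stand-in for the same idea, using os/exec rather than SSH; the unit name is assumed to be "kubelet":

```go
package svc

import "os/exec"

// kubeletActive reports whether the kubelet systemd unit is active;
// "systemctl is-active --quiet <unit>" exits 0 only when it is.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
```
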
	I1206 19:03:52.487556   83344 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:03:52.651930   83344 request.go:629] Waited for 164.276107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes
	I1206 19:03:52.651986   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes
	I1206 19:03:52.651997   83344 round_trippers.go:469] Request Headers:
	I1206 19:03:52.652008   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:03:52.652023   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:03:52.654917   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:03:52.654941   83344 round_trippers.go:577] Response Headers:
	I1206 19:03:52.654948   83344 round_trippers.go:580]     Audit-Id: 6ed39e15-8597-4a15-9067-32d85d3f4f72
	I1206 19:03:52.654955   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:03:52.654963   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:03:52.654971   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:03:52.654980   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:03:52.654989   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:03:52 GMT
	I1206 19:03:52.655157   83344 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I1206 19:03:52.655512   83344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:03:52.655533   83344 node_conditions.go:123] node cpu capacity is 2
	I1206 19:03:52.655545   83344 node_conditions.go:105] duration metric: took 167.985511ms to run NodePressure ...
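
Note: the NodePressure step reads each node's capacity figures (ephemeral storage 17784752Ki, 2 CPUs above) from the Node object. A client-go sketch of pulling those same fields, assuming an existing clientset:

```go
package nodeinfo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity lists every node's ephemeral-storage and CPU capacity, the
// two figures the NodePressure check reports in the log above.
func printCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
```
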
	I1206 19:03:52.655556   83344 start.go:228] waiting for startup goroutines ...
	I1206 19:03:52.655564   83344 start.go:233] waiting for cluster config update ...
	I1206 19:03:52.655572   83344 start.go:242] writing updated cluster config ...
	I1206 19:03:52.658049   83344 out.go:177] 
	I1206 19:03:52.659735   83344 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:03:52.659848   83344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:03:52.661731   83344 out.go:177] * Starting worker node multinode-593099-m02 in cluster multinode-593099
	I1206 19:03:52.663035   83344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:03:52.663058   83344 cache.go:56] Caching tarball of preloaded images
	I1206 19:03:52.663146   83344 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:03:52.663157   83344 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:03:52.663254   83344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:03:52.663430   83344 start.go:365] acquiring machines lock for multinode-593099-m02: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:03:52.663473   83344 start.go:369] acquired machines lock for "multinode-593099-m02" in 25.328µs
	I1206 19:03:52.663522   83344 start.go:93] Provisioning new machine with config: &{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 19:03:52.663588   83344 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1206 19:03:52.665305   83344 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1206 19:03:52.665409   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:03:52.665463   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:03:52.679621   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I1206 19:03:52.680002   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:03:52.680458   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:03:52.680482   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:03:52.680772   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:03:52.680983   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetMachineName
	I1206 19:03:52.681139   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:03:52.681300   83344 start.go:159] libmachine.API.Create for "multinode-593099" (driver="kvm2")
	I1206 19:03:52.681326   83344 client.go:168] LocalClient.Create starting
	I1206 19:03:52.681356   83344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem
	I1206 19:03:52.681390   83344 main.go:141] libmachine: Decoding PEM data...
	I1206 19:03:52.681407   83344 main.go:141] libmachine: Parsing certificate...
	I1206 19:03:52.681460   83344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem
	I1206 19:03:52.681477   83344 main.go:141] libmachine: Decoding PEM data...
	I1206 19:03:52.681488   83344 main.go:141] libmachine: Parsing certificate...
	I1206 19:03:52.681504   83344 main.go:141] libmachine: Running pre-create checks...
	I1206 19:03:52.681513   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .PreCreateCheck
	I1206 19:03:52.681683   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetConfigRaw
	I1206 19:03:52.682107   83344 main.go:141] libmachine: Creating machine...
	I1206 19:03:52.682121   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .Create
	I1206 19:03:52.682244   83344 main.go:141] libmachine: (multinode-593099-m02) Creating KVM machine...
	I1206 19:03:52.683536   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found existing default KVM network
	I1206 19:03:52.683692   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found existing private KVM network mk-multinode-593099
	I1206 19:03:52.683848   83344 main.go:141] libmachine: (multinode-593099-m02) Setting up store path in /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02 ...
	I1206 19:03:52.683882   83344 main.go:141] libmachine: (multinode-593099-m02) Building disk image from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 19:03:52.683924   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:52.683814   83700 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:03:52.684025   83344 main.go:141] libmachine: (multinode-593099-m02) Downloading /home/jenkins/minikube-integration/17740-63652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1206 19:03:52.904983   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:52.904820   83700 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa...
	I1206 19:03:53.224913   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:53.224761   83700 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/multinode-593099-m02.rawdisk...
	I1206 19:03:53.224945   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Writing magic tar header
	I1206 19:03:53.224992   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Writing SSH key tar header
	I1206 19:03:53.225031   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:53.224897   83700 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02 ...
	I1206 19:03:53.225044   83344 main.go:141] libmachine: (multinode-593099-m02) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02 (perms=drwx------)
	I1206 19:03:53.225055   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02
	I1206 19:03:53.225072   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines
	I1206 19:03:53.225082   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:03:53.225092   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652
	I1206 19:03:53.225099   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1206 19:03:53.225107   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Checking permissions on dir: /home/jenkins
	I1206 19:03:53.225113   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Checking permissions on dir: /home
	I1206 19:03:53.225123   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Skipping /home - not owner
	I1206 19:03:53.225131   83344 main.go:141] libmachine: (multinode-593099-m02) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines (perms=drwxr-xr-x)
	I1206 19:03:53.225144   83344 main.go:141] libmachine: (multinode-593099-m02) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube (perms=drwxr-xr-x)
	I1206 19:03:53.225153   83344 main.go:141] libmachine: (multinode-593099-m02) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652 (perms=drwxrwxr-x)
	I1206 19:03:53.225161   83344 main.go:141] libmachine: (multinode-593099-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 19:03:53.225168   83344 main.go:141] libmachine: (multinode-593099-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 19:03:53.225176   83344 main.go:141] libmachine: (multinode-593099-m02) Creating domain...
	I1206 19:03:53.226157   83344 main.go:141] libmachine: (multinode-593099-m02) define libvirt domain using xml: 
	I1206 19:03:53.226185   83344 main.go:141] libmachine: (multinode-593099-m02) <domain type='kvm'>
	I1206 19:03:53.226204   83344 main.go:141] libmachine: (multinode-593099-m02)   <name>multinode-593099-m02</name>
	I1206 19:03:53.226223   83344 main.go:141] libmachine: (multinode-593099-m02)   <memory unit='MiB'>2200</memory>
	I1206 19:03:53.226248   83344 main.go:141] libmachine: (multinode-593099-m02)   <vcpu>2</vcpu>
	I1206 19:03:53.226267   83344 main.go:141] libmachine: (multinode-593099-m02)   <features>
	I1206 19:03:53.226276   83344 main.go:141] libmachine: (multinode-593099-m02)     <acpi/>
	I1206 19:03:53.226284   83344 main.go:141] libmachine: (multinode-593099-m02)     <apic/>
	I1206 19:03:53.226290   83344 main.go:141] libmachine: (multinode-593099-m02)     <pae/>
	I1206 19:03:53.226298   83344 main.go:141] libmachine: (multinode-593099-m02)     
	I1206 19:03:53.226308   83344 main.go:141] libmachine: (multinode-593099-m02)   </features>
	I1206 19:03:53.226317   83344 main.go:141] libmachine: (multinode-593099-m02)   <cpu mode='host-passthrough'>
	I1206 19:03:53.226325   83344 main.go:141] libmachine: (multinode-593099-m02)   
	I1206 19:03:53.226345   83344 main.go:141] libmachine: (multinode-593099-m02)   </cpu>
	I1206 19:03:53.226358   83344 main.go:141] libmachine: (multinode-593099-m02)   <os>
	I1206 19:03:53.226371   83344 main.go:141] libmachine: (multinode-593099-m02)     <type>hvm</type>
	I1206 19:03:53.226382   83344 main.go:141] libmachine: (multinode-593099-m02)     <boot dev='cdrom'/>
	I1206 19:03:53.226395   83344 main.go:141] libmachine: (multinode-593099-m02)     <boot dev='hd'/>
	I1206 19:03:53.226407   83344 main.go:141] libmachine: (multinode-593099-m02)     <bootmenu enable='no'/>
	I1206 19:03:53.226418   83344 main.go:141] libmachine: (multinode-593099-m02)   </os>
	I1206 19:03:53.226430   83344 main.go:141] libmachine: (multinode-593099-m02)   <devices>
	I1206 19:03:53.226438   83344 main.go:141] libmachine: (multinode-593099-m02)     <disk type='file' device='cdrom'>
	I1206 19:03:53.226450   83344 main.go:141] libmachine: (multinode-593099-m02)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/boot2docker.iso'/>
	I1206 19:03:53.226461   83344 main.go:141] libmachine: (multinode-593099-m02)       <target dev='hdc' bus='scsi'/>
	I1206 19:03:53.226477   83344 main.go:141] libmachine: (multinode-593099-m02)       <readonly/>
	I1206 19:03:53.226493   83344 main.go:141] libmachine: (multinode-593099-m02)     </disk>
	I1206 19:03:53.226505   83344 main.go:141] libmachine: (multinode-593099-m02)     <disk type='file' device='disk'>
	I1206 19:03:53.226515   83344 main.go:141] libmachine: (multinode-593099-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1206 19:03:53.226527   83344 main.go:141] libmachine: (multinode-593099-m02)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/multinode-593099-m02.rawdisk'/>
	I1206 19:03:53.226536   83344 main.go:141] libmachine: (multinode-593099-m02)       <target dev='hda' bus='virtio'/>
	I1206 19:03:53.226547   83344 main.go:141] libmachine: (multinode-593099-m02)     </disk>
	I1206 19:03:53.226564   83344 main.go:141] libmachine: (multinode-593099-m02)     <interface type='network'>
	I1206 19:03:53.226579   83344 main.go:141] libmachine: (multinode-593099-m02)       <source network='mk-multinode-593099'/>
	I1206 19:03:53.226592   83344 main.go:141] libmachine: (multinode-593099-m02)       <model type='virtio'/>
	I1206 19:03:53.226606   83344 main.go:141] libmachine: (multinode-593099-m02)     </interface>
	I1206 19:03:53.226619   83344 main.go:141] libmachine: (multinode-593099-m02)     <interface type='network'>
	I1206 19:03:53.226642   83344 main.go:141] libmachine: (multinode-593099-m02)       <source network='default'/>
	I1206 19:03:53.226658   83344 main.go:141] libmachine: (multinode-593099-m02)       <model type='virtio'/>
	I1206 19:03:53.226668   83344 main.go:141] libmachine: (multinode-593099-m02)     </interface>
	I1206 19:03:53.226676   83344 main.go:141] libmachine: (multinode-593099-m02)     <serial type='pty'>
	I1206 19:03:53.226682   83344 main.go:141] libmachine: (multinode-593099-m02)       <target port='0'/>
	I1206 19:03:53.226690   83344 main.go:141] libmachine: (multinode-593099-m02)     </serial>
	I1206 19:03:53.226705   83344 main.go:141] libmachine: (multinode-593099-m02)     <console type='pty'>
	I1206 19:03:53.226717   83344 main.go:141] libmachine: (multinode-593099-m02)       <target type='serial' port='0'/>
	I1206 19:03:53.226737   83344 main.go:141] libmachine: (multinode-593099-m02)     </console>
	I1206 19:03:53.226753   83344 main.go:141] libmachine: (multinode-593099-m02)     <rng model='virtio'>
	I1206 19:03:53.226773   83344 main.go:141] libmachine: (multinode-593099-m02)       <backend model='random'>/dev/random</backend>
	I1206 19:03:53.226791   83344 main.go:141] libmachine: (multinode-593099-m02)     </rng>
	I1206 19:03:53.226805   83344 main.go:141] libmachine: (multinode-593099-m02)     
	I1206 19:03:53.226817   83344 main.go:141] libmachine: (multinode-593099-m02)     
	I1206 19:03:53.226830   83344 main.go:141] libmachine: (multinode-593099-m02)   </devices>
	I1206 19:03:53.226842   83344 main.go:141] libmachine: (multinode-593099-m02) </domain>
	I1206 19:03:53.226859   83344 main.go:141] libmachine: (multinode-593099-m02) 
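
Note: the XML dumped above is a libvirt domain definition for the new worker VM. A rough sketch of defining and booting a domain from such XML via the libvirt Go bindings; the import path, URI (taken from the KVMQemuURI in the machine config) and error handling are simplifying assumptions, not the kvm2 driver's actual code:

```go
package kvm

import (
	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers a domain from XML like the dump above and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // starts the defined domain
}
```
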
	I1206 19:03:53.233958   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:e4:5a:46 in network default
	I1206 19:03:53.234506   83344 main.go:141] libmachine: (multinode-593099-m02) Ensuring networks are active...
	I1206 19:03:53.234527   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:53.235314   83344 main.go:141] libmachine: (multinode-593099-m02) Ensuring network default is active
	I1206 19:03:53.235662   83344 main.go:141] libmachine: (multinode-593099-m02) Ensuring network mk-multinode-593099 is active
	I1206 19:03:53.236036   83344 main.go:141] libmachine: (multinode-593099-m02) Getting domain xml...
	I1206 19:03:53.236731   83344 main.go:141] libmachine: (multinode-593099-m02) Creating domain...
	I1206 19:03:54.474645   83344 main.go:141] libmachine: (multinode-593099-m02) Waiting to get IP...
	I1206 19:03:54.475563   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:54.475979   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:54.476005   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:54.475937   83700 retry.go:31] will retry after 193.266744ms: waiting for machine to come up
	I1206 19:03:54.671299   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:54.671809   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:54.671831   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:54.671771   83700 retry.go:31] will retry after 357.244354ms: waiting for machine to come up
	I1206 19:03:55.030299   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:55.030717   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:55.030745   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:55.030679   83700 retry.go:31] will retry after 386.247997ms: waiting for machine to come up
	I1206 19:03:55.418358   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:55.418775   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:55.418821   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:55.418726   83700 retry.go:31] will retry after 522.075215ms: waiting for machine to come up
	I1206 19:03:55.942992   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:55.943431   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:55.943465   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:55.943377   83700 retry.go:31] will retry after 529.210642ms: waiting for machine to come up
	I1206 19:03:56.474276   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:56.474712   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:56.474736   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:56.474668   83700 retry.go:31] will retry after 840.390031ms: waiting for machine to come up
	I1206 19:03:57.316847   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:57.317356   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:57.317378   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:57.317309   83700 retry.go:31] will retry after 1.189674889s: waiting for machine to come up
	I1206 19:03:58.508211   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:58.508627   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:58.508665   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:58.508563   83700 retry.go:31] will retry after 1.333308048s: waiting for machine to come up
	I1206 19:03:59.843254   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:03:59.843736   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:03:59.843760   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:03:59.843681   83700 retry.go:31] will retry after 1.459851071s: waiting for machine to come up
	I1206 19:04:01.305654   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:01.306243   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:04:01.306282   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:04:01.306156   83700 retry.go:31] will retry after 1.621999826s: waiting for machine to come up
	I1206 19:04:02.929385   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:02.929883   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:04:02.929915   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:04:02.929821   83700 retry.go:31] will retry after 2.820137863s: waiting for machine to come up
	I1206 19:04:05.753833   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:05.754338   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:04:05.754367   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:04:05.754281   83700 retry.go:31] will retry after 2.92221765s: waiting for machine to come up
	I1206 19:04:08.678259   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:08.678704   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:04:08.678746   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:04:08.678682   83700 retry.go:31] will retry after 4.423631725s: waiting for machine to come up
	I1206 19:04:13.104202   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:13.104672   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find current IP address of domain multinode-593099-m02 in network mk-multinode-593099
	I1206 19:04:13.104700   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | I1206 19:04:13.104620   83700 retry.go:31] will retry after 4.866625622s: waiting for machine to come up
	I1206 19:04:17.972495   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:17.973053   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has current primary IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:17.973082   83344 main.go:141] libmachine: (multinode-593099-m02) Found IP for machine: 192.168.39.6
	I1206 19:04:17.973098   83344 main.go:141] libmachine: (multinode-593099-m02) Reserving static IP address...
	I1206 19:04:17.973530   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | unable to find host DHCP lease matching {name: "multinode-593099-m02", mac: "52:54:00:49:67:33", ip: "192.168.39.6"} in network mk-multinode-593099
	I1206 19:04:18.047451   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Getting to WaitForSSH function...
	I1206 19:04:18.047493   83344 main.go:141] libmachine: (multinode-593099-m02) Reserved static IP address: 192.168.39.6
	I1206 19:04:18.047510   83344 main.go:141] libmachine: (multinode-593099-m02) Waiting for SSH to be available...
	I1206 19:04:18.049988   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.050476   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.050508   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.050733   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Using SSH client type: external
	I1206 19:04:18.050761   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa (-rw-------)
	I1206 19:04:18.050803   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:04:18.050823   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | About to run SSH command:
	I1206 19:04:18.050851   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | exit 0
	I1206 19:04:18.141084   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | SSH cmd err, output: <nil>: 
	I1206 19:04:18.141355   83344 main.go:141] libmachine: (multinode-593099-m02) KVM machine creation complete!
	I1206 19:04:18.141694   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetConfigRaw
	I1206 19:04:18.142228   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:04:18.142446   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:04:18.142651   83344 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1206 19:04:18.142669   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetState
	I1206 19:04:18.144011   83344 main.go:141] libmachine: Detecting operating system of created instance...
	I1206 19:04:18.144026   83344 main.go:141] libmachine: Waiting for SSH to be available...
	I1206 19:04:18.144032   83344 main.go:141] libmachine: Getting to WaitForSSH function...
	I1206 19:04:18.144039   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:18.146164   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.146501   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.146529   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.146645   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:18.146826   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.147030   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.147225   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:18.147395   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:04:18.147803   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:04:18.147819   83344 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1206 19:04:18.264564   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:04:18.264592   83344 main.go:141] libmachine: Detecting the provisioner...
	I1206 19:04:18.264601   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:18.267561   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.267978   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.268012   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.268222   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:18.268389   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.268523   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.268676   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:18.268820   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:04:18.269131   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:04:18.269147   83344 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1206 19:04:18.386057   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1206 19:04:18.386139   83344 main.go:141] libmachine: found compatible host: buildroot
	I1206 19:04:18.386151   83344 main.go:141] libmachine: Provisioning with buildroot...
	I1206 19:04:18.386166   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetMachineName
	I1206 19:04:18.386419   83344 buildroot.go:166] provisioning hostname "multinode-593099-m02"
	I1206 19:04:18.386444   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetMachineName
	I1206 19:04:18.386603   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:18.388979   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.389374   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.389409   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.389536   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:18.389737   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.389885   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.390062   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:18.390261   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:04:18.390593   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:04:18.390608   83344 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-593099-m02 && echo "multinode-593099-m02" | sudo tee /etc/hostname
	I1206 19:04:18.519199   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-593099-m02
	
	I1206 19:04:18.519231   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:18.522275   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.522656   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.522692   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.522873   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:18.523063   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.523250   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.523374   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:18.523529   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:04:18.523900   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:04:18.523921   83344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-593099-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-593099-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-593099-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:04:18.645609   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:04:18.645639   83344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:04:18.645655   83344 buildroot.go:174] setting up certificates
	I1206 19:04:18.645666   83344 provision.go:83] configureAuth start
	I1206 19:04:18.645678   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetMachineName
	I1206 19:04:18.645978   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:04:18.648577   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.649014   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.649047   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.649154   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:18.651261   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.651533   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.651562   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.651726   83344 provision.go:138] copyHostCerts
	I1206 19:04:18.651755   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:04:18.651791   83344 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:04:18.651803   83344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:04:18.651892   83344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:04:18.651980   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:04:18.652005   83344 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:04:18.652015   83344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:04:18.652051   83344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:04:18.652108   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:04:18.652132   83344 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:04:18.652141   83344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:04:18.652176   83344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:04:18.652241   83344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.multinode-593099-m02 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube multinode-593099-m02]
	I1206 19:04:18.754555   83344 provision.go:172] copyRemoteCerts
	I1206 19:04:18.754614   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:04:18.754653   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:18.757281   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.757595   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.757624   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.757756   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:18.757938   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.758100   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:18.758234   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:04:18.841947   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 19:04:18.842025   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:04:18.865990   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 19:04:18.866076   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1206 19:04:18.889314   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 19:04:18.889383   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:04:18.912716   83344 provision.go:86] duration metric: configureAuth took 267.032541ms
	I1206 19:04:18.912756   83344 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:04:18.913003   83344 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:04:18.913122   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:18.915784   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.916189   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:18.916222   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:18.916435   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:18.916640   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.916799   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:18.917002   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:18.917173   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:04:18.917519   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:04:18.917536   83344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:04:19.228980   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:04:19.229010   83344 main.go:141] libmachine: Checking connection to Docker...
	I1206 19:04:19.229023   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetURL
	I1206 19:04:19.230341   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | Using libvirt version 6000000
	I1206 19:04:19.232753   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.233130   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:19.233158   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.233353   83344 main.go:141] libmachine: Docker is up and running!
	I1206 19:04:19.233370   83344 main.go:141] libmachine: Reticulating splines...
	I1206 19:04:19.233378   83344 client.go:171] LocalClient.Create took 26.552044405s
	I1206 19:04:19.233490   83344 start.go:167] duration metric: libmachine.API.Create for "multinode-593099" took 26.552186929s
	I1206 19:04:19.233509   83344 start.go:300] post-start starting for "multinode-593099-m02" (driver="kvm2")
	I1206 19:04:19.233523   83344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:04:19.233551   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:04:19.233835   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:04:19.233867   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:19.235974   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.236280   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:19.236307   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.236383   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:19.236556   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:19.236722   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:19.236863   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:04:19.322234   83344 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:04:19.326059   83344 command_runner.go:130] > NAME=Buildroot
	I1206 19:04:19.326077   83344 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1206 19:04:19.326082   83344 command_runner.go:130] > ID=buildroot
	I1206 19:04:19.326087   83344 command_runner.go:130] > VERSION_ID=2021.02.12
	I1206 19:04:19.326092   83344 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1206 19:04:19.326292   83344 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:04:19.326322   83344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:04:19.326384   83344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:04:19.326472   83344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:04:19.326486   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /etc/ssl/certs/708342.pem
	I1206 19:04:19.326594   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:04:19.334539   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:04:19.358722   83344 start.go:303] post-start completed in 125.197764ms
	I1206 19:04:19.358785   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetConfigRaw
	I1206 19:04:19.359360   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:04:19.362113   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.362461   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:19.362487   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.362700   83344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:04:19.362897   83344 start.go:128] duration metric: createHost completed in 26.699299223s
	I1206 19:04:19.362919   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:19.365033   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.365413   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:19.365439   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.365622   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:19.365810   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:19.365946   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:19.366091   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:19.366227   83344 main.go:141] libmachine: Using SSH client type: native
	I1206 19:04:19.366535   83344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:04:19.366546   83344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:04:19.482368   83344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701889459.468191949
	
	I1206 19:04:19.482396   83344 fix.go:206] guest clock: 1701889459.468191949
	I1206 19:04:19.482406   83344 fix.go:219] Guest: 2023-12-06 19:04:19.468191949 +0000 UTC Remote: 2023-12-06 19:04:19.362907619 +0000 UTC m=+93.353025949 (delta=105.28433ms)
	I1206 19:04:19.482430   83344 fix.go:190] guest clock delta is within tolerance: 105.28433ms
	I1206 19:04:19.482437   83344 start.go:83] releasing machines lock for "multinode-593099-m02", held for 26.818953732s
	I1206 19:04:19.482466   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:04:19.482747   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:04:19.485110   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.485468   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:19.485505   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.487770   83344 out.go:177] * Found network options:
	I1206 19:04:19.489246   83344 out.go:177]   - NO_PROXY=192.168.39.125
	W1206 19:04:19.490597   83344 proxy.go:119] fail to check proxy env: Error ip not in block
	I1206 19:04:19.490630   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:04:19.491269   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:04:19.491454   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:04:19.491552   83344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:04:19.491594   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	W1206 19:04:19.491629   83344 proxy.go:119] fail to check proxy env: Error ip not in block
	I1206 19:04:19.491714   83344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:04:19.491738   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:04:19.494258   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.494601   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:19.494629   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.494655   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.494804   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:19.495003   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:19.495079   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:19.495117   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:19.495174   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:19.495277   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:04:19.495362   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:04:19.495440   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:04:19.495577   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:04:19.495722   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:04:19.731650   83344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 19:04:19.731654   83344 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 19:04:19.738083   83344 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 19:04:19.738324   83344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:04:19.738417   83344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:04:19.752886   83344 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1206 19:04:19.752934   83344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:04:19.752945   83344 start.go:475] detecting cgroup driver to use...
	I1206 19:04:19.753014   83344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:04:19.766708   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:04:19.778670   83344 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:04:19.778737   83344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:04:19.790865   83344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:04:19.804301   83344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:04:19.925527   83344 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1206 19:04:19.925623   83344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:04:20.066355   83344 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1206 19:04:20.066394   83344 docker.go:219] disabling docker service ...
	I1206 19:04:20.066437   83344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:04:20.081282   83344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:04:20.092576   83344 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1206 19:04:20.092772   83344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:04:20.204783   83344 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1206 19:04:20.204928   83344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:04:20.218356   83344 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1206 19:04:20.218680   83344 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1206 19:04:20.318235   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:04:20.330657   83344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:04:20.347443   83344 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 19:04:20.347784   83344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:04:20.347855   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:04:20.356836   83344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:04:20.356911   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:04:20.365756   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:04:20.374545   83344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:04:20.383457   83344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:04:20.392772   83344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:04:20.400706   83344 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:04:20.400745   83344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:04:20.400787   83344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:04:20.413040   83344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:04:20.422264   83344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:04:20.533773   83344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:04:20.697282   83344 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:04:20.697355   83344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:04:20.702267   83344 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 19:04:20.702288   83344 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 19:04:20.702296   83344 command_runner.go:130] > Device: 16h/22d	Inode: 792         Links: 1
	I1206 19:04:20.702303   83344 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:04:20.702309   83344 command_runner.go:130] > Access: 2023-12-06 19:04:20.669156474 +0000
	I1206 19:04:20.702328   83344 command_runner.go:130] > Modify: 2023-12-06 19:04:20.669156474 +0000
	I1206 19:04:20.702339   83344 command_runner.go:130] > Change: 2023-12-06 19:04:20.669156474 +0000
	I1206 19:04:20.702346   83344 command_runner.go:130] >  Birth: -
	I1206 19:04:20.702619   83344 start.go:543] Will wait 60s for crictl version
	I1206 19:04:20.702673   83344 ssh_runner.go:195] Run: which crictl
	I1206 19:04:20.705997   83344 command_runner.go:130] > /usr/bin/crictl
	I1206 19:04:20.706192   83344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:04:20.751897   83344 command_runner.go:130] > Version:  0.1.0
	I1206 19:04:20.751919   83344 command_runner.go:130] > RuntimeName:  cri-o
	I1206 19:04:20.751924   83344 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1206 19:04:20.751928   83344 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 19:04:20.751950   83344 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:04:20.752029   83344 ssh_runner.go:195] Run: crio --version
	I1206 19:04:20.796950   83344 command_runner.go:130] > crio version 1.24.1
	I1206 19:04:20.796973   83344 command_runner.go:130] > Version:          1.24.1
	I1206 19:04:20.796980   83344 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:04:20.796984   83344 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:04:20.796990   83344 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:04:20.796996   83344 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:04:20.797000   83344 command_runner.go:130] > Compiler:         gc
	I1206 19:04:20.797005   83344 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:04:20.797010   83344 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:04:20.797026   83344 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:04:20.797031   83344 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:04:20.797035   83344 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:04:20.798439   83344 ssh_runner.go:195] Run: crio --version
	I1206 19:04:20.842557   83344 command_runner.go:130] > crio version 1.24.1
	I1206 19:04:20.842585   83344 command_runner.go:130] > Version:          1.24.1
	I1206 19:04:20.842592   83344 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:04:20.842596   83344 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:04:20.842603   83344 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:04:20.842607   83344 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:04:20.842611   83344 command_runner.go:130] > Compiler:         gc
	I1206 19:04:20.842616   83344 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:04:20.842621   83344 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:04:20.842632   83344 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:04:20.842641   83344 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:04:20.842648   83344 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:04:20.845986   83344 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:04:20.847435   83344 out.go:177]   - env NO_PROXY=192.168.39.125
	I1206 19:04:20.848696   83344 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:04:20.851273   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:20.851669   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:04:20.851707   83344 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:04:20.851886   83344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:04:20.855939   83344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:04:20.869106   83344 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099 for IP: 192.168.39.6
	I1206 19:04:20.869136   83344 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:04:20.869347   83344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:04:20.869400   83344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:04:20.869416   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 19:04:20.869432   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 19:04:20.869445   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 19:04:20.869460   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 19:04:20.869543   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:04:20.869590   83344 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:04:20.869604   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:04:20.869654   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:04:20.869691   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:04:20.869725   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:04:20.869787   83344 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:04:20.869830   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem -> /usr/share/ca-certificates/70834.pem
	I1206 19:04:20.869852   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /usr/share/ca-certificates/708342.pem
	I1206 19:04:20.869871   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:04:20.870456   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:04:20.896967   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:04:20.921603   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:04:20.947682   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:04:20.973586   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:04:20.999331   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:04:21.022710   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:04:21.046299   83344 ssh_runner.go:195] Run: openssl version
	I1206 19:04:21.052003   83344 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1206 19:04:21.052076   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:04:21.061384   83344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:04:21.067613   83344 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:04:21.067640   83344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:04:21.067676   83344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:04:21.073116   83344 command_runner.go:130] > 51391683
	I1206 19:04:21.073185   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:04:21.082350   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:04:21.091380   83344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:04:21.095541   83344 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:04:21.095754   83344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:04:21.095822   83344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:04:21.100663   83344 command_runner.go:130] > 3ec20f2e
	I1206 19:04:21.101054   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:04:21.110092   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:04:21.119115   83344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:04:21.123293   83344 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:04:21.123411   83344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:04:21.123455   83344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:04:21.128496   83344 command_runner.go:130] > b5213941
	I1206 19:04:21.128565   83344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:04:21.137804   83344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:04:21.141506   83344 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:04:21.141750   83344 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:04:21.141879   83344 ssh_runner.go:195] Run: crio config
	I1206 19:04:21.192456   83344 command_runner.go:130] ! time="2023-12-06 19:04:21.181008046Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1206 19:04:21.192506   83344 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 19:04:21.203519   83344 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 19:04:21.203547   83344 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 19:04:21.203557   83344 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 19:04:21.203562   83344 command_runner.go:130] > #
	I1206 19:04:21.203570   83344 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 19:04:21.203580   83344 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 19:04:21.203589   83344 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 19:04:21.203600   83344 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 19:04:21.203610   83344 command_runner.go:130] > # reload'.
	I1206 19:04:21.203622   83344 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 19:04:21.203636   83344 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 19:04:21.203650   83344 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 19:04:21.203676   83344 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 19:04:21.203686   83344 command_runner.go:130] > [crio]
	I1206 19:04:21.203697   83344 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 19:04:21.203707   83344 command_runner.go:130] > # containers images, in this directory.
	I1206 19:04:21.203719   83344 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1206 19:04:21.203736   83344 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 19:04:21.203748   83344 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1206 19:04:21.203766   83344 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 19:04:21.203779   83344 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 19:04:21.203788   83344 command_runner.go:130] > storage_driver = "overlay"
	I1206 19:04:21.203798   83344 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 19:04:21.203812   83344 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 19:04:21.203823   83344 command_runner.go:130] > storage_option = [
	I1206 19:04:21.203832   83344 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1206 19:04:21.203841   83344 command_runner.go:130] > ]
	I1206 19:04:21.203852   83344 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 19:04:21.203869   83344 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 19:04:21.203879   83344 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 19:04:21.203892   83344 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 19:04:21.203906   83344 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 19:04:21.203918   83344 command_runner.go:130] > # always happen on a node reboot
	I1206 19:04:21.203930   83344 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 19:04:21.203944   83344 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 19:04:21.203957   83344 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 19:04:21.203974   83344 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 19:04:21.203986   83344 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1206 19:04:21.204003   83344 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 19:04:21.204019   83344 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 19:04:21.204030   83344 command_runner.go:130] > # internal_wipe = true
	I1206 19:04:21.204044   83344 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 19:04:21.204058   83344 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 19:04:21.204071   83344 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 19:04:21.204084   83344 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 19:04:21.204098   83344 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 19:04:21.204108   83344 command_runner.go:130] > [crio.api]
	I1206 19:04:21.204119   83344 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 19:04:21.204131   83344 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 19:04:21.204144   83344 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 19:04:21.204153   83344 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 19:04:21.204167   83344 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 19:04:21.204180   83344 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 19:04:21.204190   83344 command_runner.go:130] > # stream_port = "0"
	I1206 19:04:21.204203   83344 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 19:04:21.204214   83344 command_runner.go:130] > # stream_enable_tls = false
	I1206 19:04:21.204228   83344 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 19:04:21.204237   83344 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 19:04:21.204252   83344 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 19:04:21.204266   83344 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1206 19:04:21.204281   83344 command_runner.go:130] > # minutes.
	I1206 19:04:21.204291   83344 command_runner.go:130] > # stream_tls_cert = ""
	I1206 19:04:21.204304   83344 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 19:04:21.204318   83344 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1206 19:04:21.204327   83344 command_runner.go:130] > # stream_tls_key = ""
	I1206 19:04:21.204341   83344 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 19:04:21.204356   83344 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 19:04:21.204369   83344 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1206 19:04:21.204379   83344 command_runner.go:130] > # stream_tls_ca = ""
	I1206 19:04:21.204393   83344 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:04:21.204404   83344 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1206 19:04:21.204417   83344 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:04:21.204429   83344 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1206 19:04:21.204457   83344 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 19:04:21.204470   83344 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 19:04:21.204480   83344 command_runner.go:130] > [crio.runtime]
	I1206 19:04:21.204494   83344 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 19:04:21.204504   83344 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 19:04:21.204515   83344 command_runner.go:130] > # "nofile=1024:2048"
	I1206 19:04:21.204530   83344 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 19:04:21.204540   83344 command_runner.go:130] > # default_ulimits = [
	I1206 19:04:21.204548   83344 command_runner.go:130] > # ]
	I1206 19:04:21.204559   83344 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 19:04:21.204570   83344 command_runner.go:130] > # no_pivot = false
	I1206 19:04:21.204584   83344 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 19:04:21.204598   83344 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 19:04:21.204610   83344 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 19:04:21.204623   83344 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 19:04:21.204636   83344 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 19:04:21.204651   83344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:04:21.204663   83344 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1206 19:04:21.204674   83344 command_runner.go:130] > # Cgroup setting for conmon
	I1206 19:04:21.204689   83344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 19:04:21.204698   83344 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 19:04:21.204712   83344 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 19:04:21.204724   83344 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 19:04:21.204739   83344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:04:21.204749   83344 command_runner.go:130] > conmon_env = [
	I1206 19:04:21.204762   83344 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1206 19:04:21.204771   83344 command_runner.go:130] > ]
	I1206 19:04:21.204779   83344 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 19:04:21.204788   83344 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 19:04:21.204803   83344 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 19:04:21.204813   83344 command_runner.go:130] > # default_env = [
	I1206 19:04:21.204822   83344 command_runner.go:130] > # ]
	I1206 19:04:21.204833   83344 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 19:04:21.204843   83344 command_runner.go:130] > # selinux = false
	I1206 19:04:21.204857   83344 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 19:04:21.204871   83344 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1206 19:04:21.204883   83344 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1206 19:04:21.204893   83344 command_runner.go:130] > # seccomp_profile = ""
	I1206 19:04:21.204907   83344 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1206 19:04:21.204921   83344 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1206 19:04:21.204935   83344 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1206 19:04:21.204946   83344 command_runner.go:130] > # which might increase security.
	I1206 19:04:21.204954   83344 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1206 19:04:21.204966   83344 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 19:04:21.204980   83344 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 19:04:21.204994   83344 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 19:04:21.205008   83344 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 19:04:21.205021   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:04:21.205033   83344 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 19:04:21.205046   83344 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 19:04:21.205055   83344 command_runner.go:130] > # the cgroup blockio controller.
	I1206 19:04:21.205065   83344 command_runner.go:130] > # blockio_config_file = ""
	I1206 19:04:21.205080   83344 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 19:04:21.205090   83344 command_runner.go:130] > # irqbalance daemon.
	I1206 19:04:21.205103   83344 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 19:04:21.205117   83344 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 19:04:21.205129   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:04:21.205136   83344 command_runner.go:130] > # rdt_config_file = ""
	I1206 19:04:21.205150   83344 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 19:04:21.205160   83344 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 19:04:21.205172   83344 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 19:04:21.205183   83344 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 19:04:21.205195   83344 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 19:04:21.205209   83344 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 19:04:21.205218   83344 command_runner.go:130] > # will be added.
	I1206 19:04:21.205227   83344 command_runner.go:130] > # default_capabilities = [
	I1206 19:04:21.205246   83344 command_runner.go:130] > # 	"CHOWN",
	I1206 19:04:21.205257   83344 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 19:04:21.205266   83344 command_runner.go:130] > # 	"FSETID",
	I1206 19:04:21.205278   83344 command_runner.go:130] > # 	"FOWNER",
	I1206 19:04:21.205288   83344 command_runner.go:130] > # 	"SETGID",
	I1206 19:04:21.205297   83344 command_runner.go:130] > # 	"SETUID",
	I1206 19:04:21.205306   83344 command_runner.go:130] > # 	"SETPCAP",
	I1206 19:04:21.205314   83344 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 19:04:21.205324   83344 command_runner.go:130] > # 	"KILL",
	I1206 19:04:21.205333   83344 command_runner.go:130] > # ]
	I1206 19:04:21.205345   83344 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 19:04:21.205358   83344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:04:21.205369   83344 command_runner.go:130] > # default_sysctls = [
	I1206 19:04:21.205376   83344 command_runner.go:130] > # ]
	I1206 19:04:21.205388   83344 command_runner.go:130] > # List of devices on the host that a
	I1206 19:04:21.205399   83344 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 19:04:21.205410   83344 command_runner.go:130] > # allowed_devices = [
	I1206 19:04:21.205420   83344 command_runner.go:130] > # 	"/dev/fuse",
	I1206 19:04:21.205426   83344 command_runner.go:130] > # ]
	I1206 19:04:21.205439   83344 command_runner.go:130] > # List of additional devices. specified as
	I1206 19:04:21.205455   83344 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 19:04:21.205467   83344 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 19:04:21.205495   83344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:04:21.205506   83344 command_runner.go:130] > # additional_devices = [
	I1206 19:04:21.205518   83344 command_runner.go:130] > # ]
	I1206 19:04:21.205528   83344 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 19:04:21.205538   83344 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 19:04:21.205548   83344 command_runner.go:130] > # 	"/etc/cdi",
	I1206 19:04:21.205556   83344 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 19:04:21.205565   83344 command_runner.go:130] > # ]
	I1206 19:04:21.205576   83344 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 19:04:21.205590   83344 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 19:04:21.205600   83344 command_runner.go:130] > # Defaults to false.
	I1206 19:04:21.205612   83344 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 19:04:21.205627   83344 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 19:04:21.205641   83344 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 19:04:21.205651   83344 command_runner.go:130] > # hooks_dir = [
	I1206 19:04:21.205660   83344 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 19:04:21.205668   83344 command_runner.go:130] > # ]
	I1206 19:04:21.205680   83344 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 19:04:21.205694   83344 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 19:04:21.205706   83344 command_runner.go:130] > # its default mounts from the following two files:
	I1206 19:04:21.205715   83344 command_runner.go:130] > #
	I1206 19:04:21.205727   83344 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 19:04:21.205741   83344 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 19:04:21.205754   83344 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 19:04:21.205763   83344 command_runner.go:130] > #
	I1206 19:04:21.205775   83344 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 19:04:21.205789   83344 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 19:04:21.205803   83344 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 19:04:21.205815   83344 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 19:04:21.205824   83344 command_runner.go:130] > #
	I1206 19:04:21.205833   83344 command_runner.go:130] > # default_mounts_file = ""
	I1206 19:04:21.205846   83344 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 19:04:21.205861   83344 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 19:04:21.205872   83344 command_runner.go:130] > pids_limit = 1024
	I1206 19:04:21.205886   83344 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1206 19:04:21.205900   83344 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 19:04:21.205914   83344 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 19:04:21.205931   83344 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 19:04:21.205942   83344 command_runner.go:130] > # log_size_max = -1
	I1206 19:04:21.205956   83344 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 19:04:21.205967   83344 command_runner.go:130] > # log_to_journald = false
	I1206 19:04:21.205979   83344 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 19:04:21.205991   83344 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 19:04:21.206002   83344 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 19:04:21.206012   83344 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 19:04:21.206025   83344 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 19:04:21.206036   83344 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 19:04:21.206047   83344 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 19:04:21.206057   83344 command_runner.go:130] > # read_only = false
	I1206 19:04:21.206070   83344 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 19:04:21.206084   83344 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 19:04:21.206094   83344 command_runner.go:130] > # live configuration reload.
	I1206 19:04:21.206102   83344 command_runner.go:130] > # log_level = "info"
	I1206 19:04:21.206116   83344 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 19:04:21.206128   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:04:21.206139   83344 command_runner.go:130] > # log_filter = ""
	I1206 19:04:21.206153   83344 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 19:04:21.206167   83344 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 19:04:21.206176   83344 command_runner.go:130] > # separated by comma.
	I1206 19:04:21.206184   83344 command_runner.go:130] > # uid_mappings = ""
	I1206 19:04:21.206197   83344 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 19:04:21.206211   83344 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 19:04:21.206222   83344 command_runner.go:130] > # separated by comma.
	I1206 19:04:21.206230   83344 command_runner.go:130] > # gid_mappings = ""
	I1206 19:04:21.206244   83344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 19:04:21.206258   83344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:04:21.206276   83344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:04:21.206287   83344 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 19:04:21.206298   83344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 19:04:21.206312   83344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:04:21.206326   83344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:04:21.206337   83344 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 19:04:21.206351   83344 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 19:04:21.206365   83344 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 19:04:21.206378   83344 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 19:04:21.206389   83344 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 19:04:21.206400   83344 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 19:04:21.206414   83344 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 19:04:21.206426   83344 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 19:04:21.206438   83344 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 19:04:21.206451   83344 command_runner.go:130] > drop_infra_ctr = false
	I1206 19:04:21.206465   83344 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 19:04:21.206478   83344 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 19:04:21.206494   83344 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 19:04:21.206503   83344 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 19:04:21.206515   83344 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 19:04:21.206527   83344 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 19:04:21.206538   83344 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 19:04:21.206553   83344 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 19:04:21.206564   83344 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1206 19:04:21.206576   83344 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 19:04:21.206590   83344 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1206 19:04:21.206604   83344 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1206 19:04:21.206616   83344 command_runner.go:130] > # default_runtime = "runc"
	I1206 19:04:21.206629   83344 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 19:04:21.206645   83344 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1206 19:04:21.206663   83344 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 19:04:21.206675   83344 command_runner.go:130] > # creation as a file is not desired either.
	I1206 19:04:21.206692   83344 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 19:04:21.206704   83344 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 19:04:21.206716   83344 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 19:04:21.206726   83344 command_runner.go:130] > # ]
	I1206 19:04:21.206743   83344 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 19:04:21.206757   83344 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 19:04:21.206768   83344 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1206 19:04:21.206778   83344 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1206 19:04:21.206784   83344 command_runner.go:130] > #
	I1206 19:04:21.206797   83344 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1206 19:04:21.206809   83344 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1206 19:04:21.206820   83344 command_runner.go:130] > #  runtime_type = "oci"
	I1206 19:04:21.206832   83344 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1206 19:04:21.206844   83344 command_runner.go:130] > #  privileged_without_host_devices = false
	I1206 19:04:21.206855   83344 command_runner.go:130] > #  allowed_annotations = []
	I1206 19:04:21.206862   83344 command_runner.go:130] > # Where:
	I1206 19:04:21.206872   83344 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1206 19:04:21.206887   83344 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1206 19:04:21.206901   83344 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 19:04:21.206916   83344 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 19:04:21.206926   83344 command_runner.go:130] > #   in $PATH.
	I1206 19:04:21.206938   83344 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1206 19:04:21.206950   83344 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 19:04:21.206965   83344 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1206 19:04:21.206975   83344 command_runner.go:130] > #   state.
	I1206 19:04:21.206987   83344 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 19:04:21.207001   83344 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 19:04:21.207012   83344 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 19:04:21.207025   83344 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 19:04:21.207039   83344 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 19:04:21.207054   83344 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 19:04:21.207065   83344 command_runner.go:130] > #   The currently recognized values are:
	I1206 19:04:21.207080   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 19:04:21.207096   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 19:04:21.207111   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 19:04:21.207125   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 19:04:21.207141   83344 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 19:04:21.207155   83344 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 19:04:21.207169   83344 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 19:04:21.207184   83344 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1206 19:04:21.207195   83344 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 19:04:21.207205   83344 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 19:04:21.207216   83344 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1206 19:04:21.207227   83344 command_runner.go:130] > runtime_type = "oci"
	I1206 19:04:21.207238   83344 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 19:04:21.207248   83344 command_runner.go:130] > runtime_config_path = ""
	I1206 19:04:21.207256   83344 command_runner.go:130] > monitor_path = ""
	I1206 19:04:21.207267   83344 command_runner.go:130] > monitor_cgroup = ""
	I1206 19:04:21.207282   83344 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 19:04:21.207294   83344 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1206 19:04:21.207304   83344 command_runner.go:130] > # running containers
	I1206 19:04:21.207316   83344 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1206 19:04:21.207330   83344 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1206 19:04:21.207363   83344 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1206 19:04:21.207376   83344 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1206 19:04:21.207388   83344 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1206 19:04:21.207400   83344 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1206 19:04:21.207411   83344 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1206 19:04:21.207423   83344 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1206 19:04:21.207436   83344 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1206 19:04:21.207448   83344 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1206 19:04:21.207462   83344 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 19:04:21.207473   83344 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 19:04:21.207488   83344 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 19:04:21.207504   83344 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 19:04:21.207520   83344 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1206 19:04:21.207533   83344 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 19:04:21.207552   83344 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 19:04:21.207569   83344 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 19:04:21.207582   83344 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 19:04:21.207598   83344 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 19:04:21.207608   83344 command_runner.go:130] > # Example:
	I1206 19:04:21.207620   83344 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 19:04:21.207629   83344 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 19:04:21.207638   83344 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 19:04:21.207651   83344 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 19:04:21.207661   83344 command_runner.go:130] > # cpuset = 0
	I1206 19:04:21.207671   83344 command_runner.go:130] > # cpushares = "0-1"
	I1206 19:04:21.207679   83344 command_runner.go:130] > # Where:
	I1206 19:04:21.207691   83344 command_runner.go:130] > # The workload name is workload-type.
	I1206 19:04:21.207707   83344 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 19:04:21.207719   83344 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 19:04:21.207730   83344 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 19:04:21.207747   83344 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 19:04:21.207760   83344 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 19:04:21.207768   83344 command_runner.go:130] > # 
	I1206 19:04:21.207782   83344 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 19:04:21.207791   83344 command_runner.go:130] > #
	I1206 19:04:21.207802   83344 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 19:04:21.207816   83344 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1206 19:04:21.207830   83344 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1206 19:04:21.207844   83344 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1206 19:04:21.207857   83344 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1206 19:04:21.207867   83344 command_runner.go:130] > [crio.image]
	I1206 19:04:21.207879   83344 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 19:04:21.207891   83344 command_runner.go:130] > # default_transport = "docker://"
	I1206 19:04:21.207905   83344 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 19:04:21.207920   83344 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:04:21.207931   83344 command_runner.go:130] > # global_auth_file = ""
	I1206 19:04:21.207941   83344 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 19:04:21.207953   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:04:21.207965   83344 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1206 19:04:21.207979   83344 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 19:04:21.207993   83344 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:04:21.208005   83344 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:04:21.208013   83344 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 19:04:21.208027   83344 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 19:04:21.208041   83344 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 19:04:21.208055   83344 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 19:04:21.208069   83344 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 19:04:21.208080   83344 command_runner.go:130] > # pause_command = "/pause"
	I1206 19:04:21.208094   83344 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 19:04:21.208108   83344 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 19:04:21.208123   83344 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 19:04:21.208137   83344 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 19:04:21.208150   83344 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 19:04:21.208161   83344 command_runner.go:130] > # signature_policy = ""
	I1206 19:04:21.208172   83344 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 19:04:21.208186   83344 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 19:04:21.208196   83344 command_runner.go:130] > # changing them here.
	I1206 19:04:21.208203   83344 command_runner.go:130] > # insecure_registries = [
	I1206 19:04:21.208213   83344 command_runner.go:130] > # ]
	I1206 19:04:21.208229   83344 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 19:04:21.208242   83344 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 19:04:21.208253   83344 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 19:04:21.208281   83344 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 19:04:21.208292   83344 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 19:04:21.208303   83344 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 19:04:21.208313   83344 command_runner.go:130] > # CNI plugins.
	I1206 19:04:21.208323   83344 command_runner.go:130] > [crio.network]
	I1206 19:04:21.208334   83344 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 19:04:21.208349   83344 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1206 19:04:21.208360   83344 command_runner.go:130] > # cni_default_network = ""
	I1206 19:04:21.208371   83344 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 19:04:21.208381   83344 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 19:04:21.208392   83344 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 19:04:21.208402   83344 command_runner.go:130] > # plugin_dirs = [
	I1206 19:04:21.208413   83344 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 19:04:21.208422   83344 command_runner.go:130] > # ]
	I1206 19:04:21.208433   83344 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 19:04:21.208443   83344 command_runner.go:130] > [crio.metrics]
	I1206 19:04:21.208453   83344 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 19:04:21.208463   83344 command_runner.go:130] > enable_metrics = true
	I1206 19:04:21.208472   83344 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 19:04:21.208484   83344 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 19:04:21.208498   83344 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1206 19:04:21.208512   83344 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 19:04:21.208525   83344 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 19:04:21.208536   83344 command_runner.go:130] > # metrics_collectors = [
	I1206 19:04:21.208546   83344 command_runner.go:130] > # 	"operations",
	I1206 19:04:21.208555   83344 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1206 19:04:21.208567   83344 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1206 19:04:21.208579   83344 command_runner.go:130] > # 	"operations_errors",
	I1206 19:04:21.208591   83344 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1206 19:04:21.208604   83344 command_runner.go:130] > # 	"image_pulls_by_name",
	I1206 19:04:21.208615   83344 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1206 19:04:21.208626   83344 command_runner.go:130] > # 	"image_pulls_failures",
	I1206 19:04:21.208634   83344 command_runner.go:130] > # 	"image_pulls_successes",
	I1206 19:04:21.208642   83344 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 19:04:21.208652   83344 command_runner.go:130] > # 	"image_layer_reuse",
	I1206 19:04:21.208664   83344 command_runner.go:130] > # 	"containers_oom_total",
	I1206 19:04:21.208674   83344 command_runner.go:130] > # 	"containers_oom",
	I1206 19:04:21.208683   83344 command_runner.go:130] > # 	"processes_defunct",
	I1206 19:04:21.208694   83344 command_runner.go:130] > # 	"operations_total",
	I1206 19:04:21.208702   83344 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 19:04:21.208714   83344 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 19:04:21.208724   83344 command_runner.go:130] > # 	"operations_errors_total",
	I1206 19:04:21.208733   83344 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 19:04:21.208744   83344 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 19:04:21.208756   83344 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 19:04:21.208767   83344 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 19:04:21.208778   83344 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 19:04:21.208787   83344 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 19:04:21.208796   83344 command_runner.go:130] > # ]
	I1206 19:04:21.208807   83344 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 19:04:21.208816   83344 command_runner.go:130] > # metrics_port = 9090
	I1206 19:04:21.208825   83344 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 19:04:21.208836   83344 command_runner.go:130] > # metrics_socket = ""
	I1206 19:04:21.208848   83344 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 19:04:21.208862   83344 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 19:04:21.208877   83344 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 19:04:21.208889   83344 command_runner.go:130] > # certificate on any modification event.
	I1206 19:04:21.208899   83344 command_runner.go:130] > # metrics_cert = ""
	I1206 19:04:21.208908   83344 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 19:04:21.208921   83344 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 19:04:21.208932   83344 command_runner.go:130] > # metrics_key = ""
	I1206 19:04:21.208945   83344 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 19:04:21.208956   83344 command_runner.go:130] > [crio.tracing]
	I1206 19:04:21.208968   83344 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 19:04:21.208979   83344 command_runner.go:130] > # enable_tracing = false
	I1206 19:04:21.208989   83344 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1206 19:04:21.208999   83344 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1206 19:04:21.209011   83344 command_runner.go:130] > # Number of samples to collect per million spans.
	I1206 19:04:21.209024   83344 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 19:04:21.209038   83344 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 19:04:21.209048   83344 command_runner.go:130] > [crio.stats]
	I1206 19:04:21.209062   83344 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 19:04:21.209074   83344 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 19:04:21.209082   83344 command_runner.go:130] > # stats_collection_period = 0
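	The configuration dump above notes that several options support live reload via SIGHUP. A minimal sketch of triggering that reload on the guest, assuming the CRI-O process is named crio as in this run:
	
	# re-read crio.conf without restarting the daemon; only options marked
	# 'This option supports live configuration reload' are picked up
	sudo pkill -HUP -x crio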
	I1206 19:04:21.209152   83344 cni.go:84] Creating CNI manager for ""
	I1206 19:04:21.209163   83344 cni.go:136] 2 nodes found, recommending kindnet
	I1206 19:04:21.209187   83344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:04:21.209218   83344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-593099 NodeName:multinode-593099-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:04:21.209378   83344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-593099-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
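	For comparison with the generated config above, kubeadm can print its upstream defaults for the same API objects; this is a quick way to see which fields minikube overrides (run with the same v1.28.4 kubeadm binary):
	
	kubeadm config print init-defaults
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration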
	I1206 19:04:21.209444   83344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-593099-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:04:21.209512   83344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:04:21.218046   83344 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1206 19:04:21.218145   83344 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1206 19:04:21.218214   83344 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1206 19:04:21.227210   83344 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1206 19:04:21.227221   83344 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1206 19:04:21.227242   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1206 19:04:21.227256   83344 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1206 19:04:21.227306   83344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1206 19:04:21.231556   83344 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1206 19:04:21.231598   83344 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1206 19:04:21.231616   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1206 19:04:22.294758   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:04:22.310657   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1206 19:04:22.310742   83344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1206 19:04:22.315321   83344 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1206 19:04:22.315369   83344 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1206 19:04:22.315396   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I1206 19:04:24.558572   83344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1206 19:04:24.558658   83344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1206 19:04:24.564045   83344 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1206 19:04:24.564088   83344 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1206 19:04:24.564112   83344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
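	The kubectl, kubelet and kubeadm downloads above are fetched against the published .sha256 files; a spot check of a transferred binary from inside the guest could look like this (a sketch, assuming the guest has network access):
	
	curl -fsSLo /tmp/kubeadm.sha256 https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	echo "$(cat /tmp/kubeadm.sha256)  /var/lib/minikube/binaries/v1.28.4/kubeadm" | sha256sum --check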
	I1206 19:04:24.794136   83344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1206 19:04:24.803493   83344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 19:04:24.819220   83344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
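	The kubelet unit file and 10-kubeadm.conf drop-in written above only take effect once systemd re-reads its units and kubelet is restarted; the equivalent manual steps on the node would be:
	
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet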
	I1206 19:04:24.835372   83344 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I1206 19:04:24.839510   83344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:04:24.852141   83344 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:04:24.852414   83344 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:04:24.852605   83344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:04:24.852657   83344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:04:24.867347   83344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I1206 19:04:24.867861   83344 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:04:24.868323   83344 main.go:141] libmachine: Using API Version  1
	I1206 19:04:24.868354   83344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:04:24.868675   83344 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:04:24.868892   83344 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:04:24.869053   83344 start.go:304] JoinCluster: &{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:04:24.869163   83344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1206 19:04:24.869187   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:04:24.872182   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:04:24.872606   83344 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:04:24.872635   83344 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:04:24.872793   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:04:24.872949   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:04:24.873116   83344 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:04:24.873270   83344 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:04:25.045582   83344 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dhwnux.4bqle4ja21x9bs0n --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 19:04:25.045765   83344 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 19:04:25.045814   83344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dhwnux.4bqle4ja21x9bs0n --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-593099-m02"
	I1206 19:04:25.092459   83344 command_runner.go:130] ! W1206 19:04:25.084624     818 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1206 19:04:25.224365   83344 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 19:04:27.423148   83344 command_runner.go:130] > [preflight] Running pre-flight checks
	I1206 19:04:27.423180   83344 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1206 19:04:27.423193   83344 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1206 19:04:27.423204   83344 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 19:04:27.423216   83344 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 19:04:27.423224   83344 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1206 19:04:27.423242   83344 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1206 19:04:27.423260   83344 command_runner.go:130] > This node has joined the cluster:
	I1206 19:04:27.423274   83344 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1206 19:04:27.423286   83344 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1206 19:04:27.423300   83344 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1206 19:04:27.423336   83344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dhwnux.4bqle4ja21x9bs0n --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-593099-m02": (2.37750553s)
	I1206 19:04:27.423360   83344 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1206 19:04:27.551730   83344 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1206 19:04:27.662986   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=multinode-593099 minikube.k8s.io/updated_at=2023_12_06T19_04_27_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:04:27.772944   83344 command_runner.go:130] > node/multinode-593099-m02 labeled
	I1206 19:04:27.775997   83344 start.go:306] JoinCluster complete in 2.906948132s
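
Stripped of the driver plumbing, the join recorded above is four commands: print a join command on the control plane, run it on the worker with preflight errors ignored, enable the kubelet, and label the new node. A hedged Go sketch using os/exec is below; running the binaries locally and setting a single primary=false label are simplifying assumptions, since minikube drives the same commands over SSH and applies several labels at once.

// Sketch of the worker-join sequence; command names and flags match the log above.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Step 1: mint a token and join command on the control plane.
	joinCmd := run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")

	// Step 2: run the printed join command on the worker, ignoring preflight errors as in the log.
	args := append(strings.Fields(joinCmd)[1:], "--ignore-preflight-errors=all", "--node-name=multinode-593099-m02")
	run("kubeadm", args...)

	// Step 3: enable and start the kubelet so the node keeps rejoining after reboots.
	run("systemctl", "enable", "--now", "kubelet")

	// Step 4: tag the node so minikube can tell workers from the primary.
	run("kubectl", "label", "nodes", "minikube.k8s.io/primary=false",
		"-l", "minikube.k8s.io/primary!=true", "--overwrite")
}
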
	I1206 19:04:27.776022   83344 cni.go:84] Creating CNI manager for ""
	I1206 19:04:27.776028   83344 cni.go:136] 2 nodes found, recommending kindnet
	I1206 19:04:27.776084   83344 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 19:04:27.789348   83344 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1206 19:04:27.789377   83344 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1206 19:04:27.789384   83344 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1206 19:04:27.789390   83344 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:04:27.789395   83344 command_runner.go:130] > Access: 2023-12-06 19:02:59.161576619 +0000
	I1206 19:04:27.789400   83344 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1206 19:04:27.789405   83344 command_runner.go:130] > Change: 2023-12-06 19:02:57.329576619 +0000
	I1206 19:04:27.789408   83344 command_runner.go:130] >  Birth: -
	I1206 19:04:27.789460   83344 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 19:04:27.789474   83344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 19:04:27.824513   83344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 19:04:28.135599   83344 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:04:28.141423   83344 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:04:28.145421   83344 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1206 19:04:28.158981   83344 command_runner.go:130] > daemonset.apps/kindnet configured
	I1206 19:04:28.162660   83344 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:04:28.162930   83344 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:04:28.163410   83344 round_trippers.go:463] GET https://192.168.39.125:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 19:04:28.163435   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:28.163447   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:28.163456   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:28.166916   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:28.166966   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:28.166978   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:28.167007   83344 round_trippers.go:580]     Content-Length: 291
	I1206 19:04:28.167015   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:28 GMT
	I1206 19:04:28.167023   83344 round_trippers.go:580]     Audit-Id: 465e2d4d-de59-461f-b28d-6b2dfa00cb88
	I1206 19:04:28.167034   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:28.167044   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:28.167052   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:28.167087   83344 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"403","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1206 19:04:28.167196   83344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-593099" context rescaled to 1 replicas
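
The rescale above reads the coredns deployment's scale subresource and pins it to one replica for the multi-node cluster. The sketch below reproduces the same check-and-scale step with the kubectl CLI, which is an assumption; minikube issues the GET (and any update) against /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale directly.

// Sketch: read coredns replicas, then scale the deployment to 1.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of GET .../deployments/coredns/scale -> spec.replicas
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "deployment", "coredns",
		"-o", "jsonpath={.spec.replicas}").CombinedOutput()
	if err != nil {
		log.Fatalf("get replicas: %v\n%s", err, out)
	}
	fmt.Println("coredns replicas:", strings.TrimSpace(string(out)))

	// Equivalent of the rescale to 1 replica reported in the log.
	if out, err := exec.Command("kubectl", "-n", "kube-system", "scale", "deployment", "coredns",
		"--replicas=1").CombinedOutput(); err != nil {
		log.Fatalf("scale: %v\n%s", err, out)
	}
}
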
	I1206 19:04:28.167227   83344 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 19:04:28.169182   83344 out.go:177] * Verifying Kubernetes components...
	I1206 19:04:28.170720   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:04:28.184080   83344 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:04:28.184351   83344 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:04:28.184589   83344 node_ready.go:35] waiting up to 6m0s for node "multinode-593099-m02" to be "Ready" ...
	I1206 19:04:28.184653   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:28.184660   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:28.184669   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:28.184676   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:28.187698   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:28.187724   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:28.187734   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:28.187743   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:28 GMT
	I1206 19:04:28.187751   83344 round_trippers.go:580]     Audit-Id: 4ab6ee04-de89-4419-aeaa-1d42fabf118a
	I1206 19:04:28.187759   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:28.187767   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:28.187775   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:28.187783   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:28.187930   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:28.188316   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:28.188333   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:28.188344   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:28.188353   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:28.191350   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:28.191378   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:28.191387   83344 round_trippers.go:580]     Audit-Id: aa2472c3-60c6-4eab-b57c-2d067f960e5d
	I1206 19:04:28.191395   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:28.191403   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:28.191411   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:28.191419   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:28.191431   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:28.191438   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:28 GMT
	I1206 19:04:28.191535   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:28.692629   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:28.692655   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:28.692666   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:28.692674   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:28.695993   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:28.696021   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:28.696028   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:28.696034   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:28.696039   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:28 GMT
	I1206 19:04:28.696044   83344 round_trippers.go:580]     Audit-Id: 6c1775f2-2d03-4958-a0c6-8ed6d718ec65
	I1206 19:04:28.696052   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:28.696064   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:28.696082   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:28.696169   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:29.192805   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:29.192833   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:29.192844   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:29.192853   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:29.195650   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:29.195685   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:29.195696   83344 round_trippers.go:580]     Audit-Id: 0ec5f690-e200-4d1d-9252-57779c44473d
	I1206 19:04:29.195705   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:29.195714   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:29.195722   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:29.195735   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:29.195746   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:29.195755   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:29 GMT
	I1206 19:04:29.195873   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:29.692311   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:29.692345   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:29.692357   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:29.692368   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:29.694984   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:29.695020   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:29.695031   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:29.695041   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:29.695051   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:29.695060   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:29.695066   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:29 GMT
	I1206 19:04:29.695072   83344 round_trippers.go:580]     Audit-Id: 9fd368e5-9dd0-40ab-92ba-3851eabbf8e5
	I1206 19:04:29.695079   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:29.695143   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:30.192185   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:30.192212   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:30.192220   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:30.192226   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:30.194911   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:30.194935   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:30.194942   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:30.194948   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:30.194953   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:30.194958   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:30 GMT
	I1206 19:04:30.194963   83344 round_trippers.go:580]     Audit-Id: eb2eb0ee-f753-4fd2-8ed8-d5cdbe8f06bd
	I1206 19:04:30.194968   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:30.194973   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:30.195046   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:30.195302   83344 node_ready.go:58] node "multinode-593099-m02" has status "Ready":"False"
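
The repeated GETs from here on are a readiness wait: minikube polls /api/v1/nodes/multinode-593099-m02 roughly every 500ms, for up to 6m0s, until the node's Ready condition flips to True. The sketch below shows the same loop; polling through kubectl's jsonpath output is an assumption, as the real code inspects the raw node JSON returned by the API server as logged.

// Sketch of the readiness wait that produces the repeated node GETs above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const node = "multinode-593099-m02"
	deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
		if err != nil {
			log.Printf("get node: %v\n%s", err, out)
		} else if strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node", node, "is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval in the log
	}
	log.Fatalf("node %s did not become Ready before the deadline", node)
}
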
	I1206 19:04:30.692630   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:30.692656   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:30.692667   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:30.692676   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:30.697217   83344 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:04:30.697251   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:30.697262   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:30.697270   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:30.697279   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:30.697305   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:30.697321   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:30 GMT
	I1206 19:04:30.697330   83344 round_trippers.go:580]     Audit-Id: d8a3a344-7efc-48d9-9ff9-3b0ead5cc0e2
	I1206 19:04:30.697343   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:30.697454   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:31.192587   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:31.192614   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:31.192631   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:31.192641   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:31.195544   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:31.195572   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:31.195583   83344 round_trippers.go:580]     Audit-Id: ec08d937-7bef-4943-8b09-e1b06dbfb356
	I1206 19:04:31.195591   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:31.195599   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:31.195607   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:31.195616   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:31.195628   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:31.195636   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:31 GMT
	I1206 19:04:31.195816   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:31.692346   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:31.692372   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:31.692391   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:31.692397   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:31.695736   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:31.695806   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:31.695822   83344 round_trippers.go:580]     Audit-Id: 6acfd121-6827-4127-92ae-388de18e4950
	I1206 19:04:31.695831   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:31.695841   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:31.695850   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:31.695861   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:31.695870   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:31.695880   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:31 GMT
	I1206 19:04:31.696017   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:32.192646   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:32.192672   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:32.192680   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:32.192686   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:32.195462   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:32.195485   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:32.195492   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:32 GMT
	I1206 19:04:32.195497   83344 round_trippers.go:580]     Audit-Id: 9498c138-2e58-4bf1-9558-fb11999c2ede
	I1206 19:04:32.195502   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:32.195508   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:32.195513   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:32.195518   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:32.195523   83344 round_trippers.go:580]     Content-Length: 4081
	I1206 19:04:32.195601   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"455","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3057 chars]
	I1206 19:04:32.195842   83344 node_ready.go:58] node "multinode-593099-m02" has status "Ready":"False"
	I1206 19:04:32.692527   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:32.692547   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:32.692556   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:32.692562   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:32.695173   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:32.695196   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:32.695205   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:32.695213   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:32.695220   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:32.695232   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:32 GMT
	I1206 19:04:32.695241   83344 round_trippers.go:580]     Audit-Id: 33333ea8-4a99-4a8d-9888-6d56e2849f8d
	I1206 19:04:32.695250   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:32.695448   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:33.192174   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:33.192201   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:33.192209   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:33.192215   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:33.196017   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:33.196045   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:33.196057   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:33.196065   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:33.196072   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:33.196080   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:33 GMT
	I1206 19:04:33.196087   83344 round_trippers.go:580]     Audit-Id: ae2574c1-3937-46ab-a9f3-e53f473ac25d
	I1206 19:04:33.196093   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:33.196525   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:33.692205   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:33.692233   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:33.692241   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:33.692247   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:33.695157   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:33.695183   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:33.695194   83344 round_trippers.go:580]     Audit-Id: 9b7bfee8-3d89-46b5-b89a-9398f264d1d1
	I1206 19:04:33.695202   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:33.695210   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:33.695216   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:33.695221   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:33.695231   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:33 GMT
	I1206 19:04:33.695370   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:34.192020   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:34.192047   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:34.192058   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:34.192066   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:34.195223   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:34.195258   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:34.195268   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:34.195276   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:34.195283   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:34 GMT
	I1206 19:04:34.195290   83344 round_trippers.go:580]     Audit-Id: 8b370074-3bf2-4a4a-bb63-16aa1c9d5aae
	I1206 19:04:34.195297   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:34.195304   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:34.195865   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:34.196184   83344 node_ready.go:58] node "multinode-593099-m02" has status "Ready":"False"
	I1206 19:04:34.692205   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:34.692228   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:34.692246   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:34.692252   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:34.694730   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:34.694757   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:34.694767   83344 round_trippers.go:580]     Audit-Id: 061eb5aa-1e6b-4bba-aae9-0a7164dcf9ab
	I1206 19:04:34.694777   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:34.694785   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:34.694792   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:34.694800   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:34.694812   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:34 GMT
	I1206 19:04:34.695151   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:35.192619   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:35.192643   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:35.192656   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:35.192664   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:35.195455   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:35.195480   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:35.195489   83344 round_trippers.go:580]     Audit-Id: c6d6b303-e29f-4917-bb7f-35063997e5eb
	I1206 19:04:35.195497   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:35.195504   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:35.195512   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:35.195519   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:35.195527   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:35 GMT
	I1206 19:04:35.195700   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:35.692106   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:35.692133   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:35.692142   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:35.692148   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:35.694793   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:35.694814   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:35.694820   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:35.694826   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:35.694832   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:35.694841   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:35.694849   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:35 GMT
	I1206 19:04:35.694858   83344 round_trippers.go:580]     Audit-Id: e30f5149-a2f3-4232-a84b-16d5f73f0c66
	I1206 19:04:35.695123   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:36.192693   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:36.192719   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.192727   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.192733   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.195560   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:36.195584   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.195590   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.195596   83344 round_trippers.go:580]     Audit-Id: bc1b6fe2-bcec-4062-b35a-6c4fcda12803
	I1206 19:04:36.195601   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.195606   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.195610   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.195616   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.195949   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"473","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3166 chars]
	I1206 19:04:36.692697   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:36.692726   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.692734   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.692740   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.696223   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:36.696242   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.696249   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.696261   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.696278   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.696290   83344 round_trippers.go:580]     Audit-Id: 6a594a0b-e351-423e-819d-665dd78d7a62
	I1206 19:04:36.696298   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.696303   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.696508   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"483","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3252 chars]
	I1206 19:04:36.696794   83344 node_ready.go:49] node "multinode-593099-m02" has status "Ready":"True"
	I1206 19:04:36.696814   83344 node_ready.go:38] duration metric: took 8.512208701s waiting for node "multinode-593099-m02" to be "Ready" ...
	I1206 19:04:36.696826   83344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:04:36.696903   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:04:36.696914   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.696925   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.696940   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.701386   83344 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:04:36.701407   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.701417   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.701428   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.701436   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.701442   83344 round_trippers.go:580]     Audit-Id: 24d4cf29-05bd-47cb-8f3d-d875e9dbe23d
	I1206 19:04:36.701446   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.701451   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.703422   83344 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"483"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"399","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67356 chars]
	I1206 19:04:36.706499   83344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.706590   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:04:36.706602   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.706613   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.706626   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.711547   83344 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:04:36.711563   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.711570   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.711576   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.711581   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.711589   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.711597   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.711605   83344 round_trippers.go:580]     Audit-Id: 9f80c053-6301-408d-aed5-c36006ed98f4
	I1206 19:04:36.711756   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"399","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1206 19:04:36.712181   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:36.712196   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.712203   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.712209   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.714222   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:04:36.714244   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.714252   83344 round_trippers.go:580]     Audit-Id: abc653e8-0756-44b3-b5c0-60199fe9333f
	I1206 19:04:36.714260   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.714271   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.714279   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.714296   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.714305   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.714433   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:04:36.714747   83344 pod_ready.go:92] pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace has status "Ready":"True"
	I1206 19:04:36.714763   83344 pod_ready.go:81] duration metric: took 8.236965ms waiting for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.714772   83344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.714819   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:04:36.714826   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.714833   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.714838   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.716671   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:04:36.716690   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.716699   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.716711   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.716719   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.716726   83344 round_trippers.go:580]     Audit-Id: 0cb8edf0-60cb-442f-97fa-3c7b62fceb79
	I1206 19:04:36.716735   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.716746   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.717011   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"275","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1206 19:04:36.717475   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:36.717494   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.717504   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.717514   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.719668   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:36.719684   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.719690   83344 round_trippers.go:580]     Audit-Id: 5af46402-6606-4590-b8cc-4068ee35daf5
	I1206 19:04:36.719696   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.719701   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.719715   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.719728   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.719740   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.720053   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:04:36.720388   83344 pod_ready.go:92] pod "etcd-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:04:36.720406   83344 pod_ready.go:81] duration metric: took 5.628983ms waiting for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.720418   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.720463   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-593099
	I1206 19:04:36.720471   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.720478   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.720484   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.722664   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:36.722679   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.722687   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.722695   83344 round_trippers.go:580]     Audit-Id: 26a208b6-5493-418a-9651-85dc9d1b917d
	I1206 19:04:36.722704   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.722713   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.722724   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.722729   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.722975   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-593099","namespace":"kube-system","uid":"c32eea84-5395-4ffd-9fe4-51ae29b0861c","resourceVersion":"277","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.125:8443","kubernetes.io/config.hash":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.mirror":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.seen":"2023-12-06T19:03:30.652197401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1206 19:04:36.723440   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:36.723456   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.723463   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.723469   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.725258   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:04:36.725273   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.725279   83344 round_trippers.go:580]     Audit-Id: af079aee-96ae-417c-acb6-8102a0feeedd
	I1206 19:04:36.725284   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.725289   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.725297   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.725306   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.725314   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.725559   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:04:36.725945   83344 pod_ready.go:92] pod "kube-apiserver-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:04:36.725966   83344 pod_ready.go:81] duration metric: took 5.539152ms waiting for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.725978   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.726041   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-593099
	I1206 19:04:36.726052   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.726062   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.726074   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.727748   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:04:36.727761   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.727766   83344 round_trippers.go:580]     Audit-Id: 6483c389-9259-4242-b74a-45be4013f8df
	I1206 19:04:36.727772   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.727777   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.727781   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.727786   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.727797   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.728060   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-593099","namespace":"kube-system","uid":"bd10545f-240d-418a-b4ca-a48c978a56c9","resourceVersion":"293","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.mirror":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.seen":"2023-12-06T19:03:30.652198715Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1206 19:04:36.728537   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:36.728555   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.728566   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.728575   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.730200   83344 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:04:36.730215   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.730222   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.730227   83344 round_trippers.go:580]     Audit-Id: 13bc6dcd-598c-4d22-8539-0df063ca1521
	I1206 19:04:36.730238   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.730248   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.730260   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.730271   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.730450   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:04:36.730817   83344 pod_ready.go:92] pod "kube-controller-manager-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:04:36.730839   83344 pod_ready.go:81] duration metric: took 4.846603ms waiting for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.730852   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:36.893318   83344 request.go:629] Waited for 162.381763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:04:36.893411   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:04:36.893419   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:36.893433   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:36.893461   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:36.895995   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:36.896021   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:36.896031   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:36.896039   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:36.896046   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:36.896056   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:36 GMT
	I1206 19:04:36.896064   83344 round_trippers.go:580]     Audit-Id: a8acd998-0e87-4d48-b84f-9f2b5606c8e6
	I1206 19:04:36.896078   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:36.896297   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggxmb","generateName":"kube-proxy-","namespace":"kube-system","uid":"9967a10f-783d-4e8f-bb49-f609c7227307","resourceVersion":"470","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:04:37.093268   83344 request.go:629] Waited for 196.438829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:37.093359   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:04:37.093365   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:37.093372   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:37.093380   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:37.095931   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:37.095952   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:37.095958   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:37.095964   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:37.095970   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:37.095978   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:37 GMT
	I1206 19:04:37.095987   83344 round_trippers.go:580]     Audit-Id: cda2293c-b390-44c9-bf15-6083c6ca1557
	I1206 19:04:37.095999   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:37.096334   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"483","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_04_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3252 chars]
	I1206 19:04:37.096693   83344 pod_ready.go:92] pod "kube-proxy-ggxmb" in "kube-system" namespace has status "Ready":"True"
	I1206 19:04:37.096714   83344 pod_ready.go:81] duration metric: took 365.850366ms waiting for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:37.096735   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:37.292710   83344 request.go:629] Waited for 195.883326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:04:37.292812   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:04:37.292820   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:37.292828   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:37.292838   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:37.296316   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:37.296345   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:37.296357   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:37.296365   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:37 GMT
	I1206 19:04:37.296374   83344 round_trippers.go:580]     Audit-Id: cd02d1dc-ab4a-4362-8c07-970cd397d135
	I1206 19:04:37.296382   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:37.296395   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:37.296403   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:37.296595   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-thqkt","generateName":"kube-proxy-","namespace":"kube-system","uid":"0012fda4-56e7-4054-ab90-1704569e66e8","resourceVersion":"368","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:04:37.493529   83344 request.go:629] Waited for 196.373585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:37.493604   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:37.493611   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:37.493623   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:37.493633   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:37.496400   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:37.496427   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:37.496438   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:37.496447   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:37.496455   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:37.496463   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:37 GMT
	I1206 19:04:37.496477   83344 round_trippers.go:580]     Audit-Id: 52b4a9fa-a249-47a1-8503-678d763261b2
	I1206 19:04:37.496485   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:37.496614   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:04:37.497015   83344 pod_ready.go:92] pod "kube-proxy-thqkt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:04:37.497036   83344 pod_ready.go:81] duration metric: took 400.289447ms waiting for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:37.497050   83344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:37.693402   83344 request.go:629] Waited for 196.270973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:04:37.693465   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:04:37.693471   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:37.693478   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:37.693487   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:37.696128   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:37.696154   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:37.696164   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:37.696172   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:37 GMT
	I1206 19:04:37.696177   83344 round_trippers.go:580]     Audit-Id: c877ceac-7eb7-4f69-910d-514209b5666a
	I1206 19:04:37.696182   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:37.696187   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:37.696194   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:37.696493   83344 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-593099","namespace":"kube-system","uid":"7ae8a659-33ba-4e2b-9211-8d84efe7e5a4","resourceVersion":"281","creationTimestamp":"2023-12-06T19:03:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.mirror":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.seen":"2023-12-06T19:03:21.456083881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1206 19:04:37.893246   83344 request.go:629] Waited for 196.35364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:37.893326   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:04:37.893330   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:37.893338   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:37.893344   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:37.896284   83344 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:04:37.896312   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:37.896319   83344 round_trippers.go:580]     Audit-Id: 0c9758f1-26dd-4069-9a05-9ba323ce46d6
	I1206 19:04:37.896325   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:37.896330   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:37.896338   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:37.896343   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:37.896349   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:37 GMT
	I1206 19:04:37.897156   83344 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1206 19:04:37.897482   83344 pod_ready.go:92] pod "kube-scheduler-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:04:37.897500   83344 pod_ready.go:81] duration metric: took 400.437633ms waiting for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:04:37.897510   83344 pod_ready.go:38] duration metric: took 1.200668792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:04:37.897538   83344 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 19:04:37.897592   83344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:04:37.910736   83344 system_svc.go:56] duration metric: took 13.19942ms WaitForService to wait for kubelet.
	I1206 19:04:37.910772   83344 kubeadm.go:581] duration metric: took 9.743512633s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 19:04:37.910797   83344 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:04:38.093293   83344 request.go:629] Waited for 182.400594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes
	I1206 19:04:38.093366   83344 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes
	I1206 19:04:38.093371   83344 round_trippers.go:469] Request Headers:
	I1206 19:04:38.093378   83344 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:04:38.093385   83344 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:04:38.096857   83344 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:04:38.096870   83344 round_trippers.go:577] Response Headers:
	I1206 19:04:38.096881   83344 round_trippers.go:580]     Content-Type: application/json
	I1206 19:04:38.096887   83344 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:04:38.096892   83344 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:04:38.096897   83344 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:04:38 GMT
	I1206 19:04:38.096902   83344 round_trippers.go:580]     Audit-Id: 35422c96-096d-448a-8472-2277de04838c
	I1206 19:04:38.096907   83344 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:04:38.097357   83344 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"378","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10076 chars]
	I1206 19:04:38.097833   83344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:04:38.097855   83344 node_conditions.go:123] node cpu capacity is 2
	I1206 19:04:38.097865   83344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:04:38.097869   83344 node_conditions.go:123] node cpu capacity is 2
	I1206 19:04:38.097872   83344 node_conditions.go:105] duration metric: took 187.070603ms to run NodePressure ...
	I1206 19:04:38.097885   83344 start.go:228] waiting for startup goroutines ...
	I1206 19:04:38.097916   83344 start.go:242] writing updated cluster config ...
	I1206 19:04:38.098192   83344 ssh_runner.go:195] Run: rm -f paused
	I1206 19:04:38.147197   83344 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 19:04:38.150539   83344 out.go:177] * Done! kubectl is now configured to use "multinode-593099" cluster and "default" namespace by default
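	[Editor's note] The log above shows minikube repeatedly issuing GET requests against /api/v1/nodes/<name> and the kube-system pods until each reports a "Ready":"True" condition, occasionally pausing when client-side throttling kicks in. Below is a minimal, illustrative client-go sketch of such a readiness poll; the kubeconfig path, node name, interval, and timeout are assumptions for the example and this is not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the named node reports the
	// NodeReady condition with status True, similar in spirit to the
	// GET /api/v1/nodes/<name> loop visible in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Keep polling on transient API errors instead of failing outright.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Kubeconfig path and node name are hypothetical values for illustration.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "multinode-593099-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}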
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:02:58 UTC, ends at Wed 2023-12-06 19:04:45 UTC. --
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.032275288Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4e792c266ca4b35895e7539d32247730fa72bdc8eae3e17d32d60d69e54c3b5d,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-x24l4,Uid:b2c96072-6364-4b62-9a74-2aa19b4a2e69,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701889479235164811,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T19:04:38.898990552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d026c6370dd97fcab36935b517d24f35f1a6647f7c47e28a3582441ec8db5cf5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:35974b37-5aff-4940-8e2d-5fec9d1e2166,Namespace:kube-system,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1701889428956037732,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/
tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-06T19:03:48.614322286Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c22a429e413b13b813ed0d8a0ca59b11dd08b50b1a8de968bc5191de9db25c9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-h6rcq,Uid:85247dde-4cee-482e-8f9b-a9e8f4e7172e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701889428935888742,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T19:03:48.604760322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:626962c7162a4f73bc48398e518cc33725ffc18edb325fda7a9ec131b31a9ebe,Metadata:&PodSandboxMetadata{Name:kube-proxy-thqkt,Uid:0012fda4-56e7-4054-ab90-1704569e66e8,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1701889423784982543,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-1704569e66e8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T19:03:43.145544468Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d45f1d516f6299a92ca9d775cb2c2c0e32a45a22b94de5cda9068c4daeca324a,Metadata:&PodSandboxMetadata{Name:kindnet-x2r64,Uid:1dafec99-c18b-40ca-8b9d-b5d520390c8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701889423756334537,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,k8s-app: kindnet,pod-template-genera
tion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T19:03:43.112916897Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:267da5e7a47c42af493021765fb48ecc69cc72239be9b4e7268b722e72f193aa,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-593099,Uid:e0f1a77aff616164d10d488d27b08307,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701889402602774193,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e0f1a77aff616164d10d488d27b08307,kubernetes.io/config.seen: 2023-12-06T19:03:21.456082950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:37344960b2b4acdc5a7b3df5efbcced7840ac79f2f173af848b4e9949258aeac,Metadata:&PodSandboxMetada
ta{Name:etcd-multinode-593099,Uid:9ce14df981100c86a2ade94d91a33196,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701889402594924658,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.125:2379,kubernetes.io/config.hash: 9ce14df981100c86a2ade94d91a33196,kubernetes.io/config.seen: 2023-12-06T19:03:21.456077539Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f24a01942585ba24877d880c2a5e106e8516f10f996974a431802b5312eaa64,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-593099,Uid:c031365adbae2937d228cc911fbfd7d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701889402579782465,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,i
o.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c031365adbae2937d228cc911fbfd7d4,kubernetes.io/config.seen: 2023-12-06T19:03:21.456083881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:283f3d74b5d004e68170aac4c8f1c5fa7d0eb3edf7666dfd1370391038be21de,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-593099,Uid:6290493e5e32b3d1986ab88f381ba97f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701889402536131614,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.125:8443,kubern
etes.io/config.hash: 6290493e5e32b3d1986ab88f381ba97f,kubernetes.io/config.seen: 2023-12-06T19:03:21.456081863Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=98331e6e-794d-4019-800e-522932b3febd name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.033091636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85c4d9aa-0066-4fd4-b92d-4b1675588fd3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.033144573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85c4d9aa-0066-4fd4-b92d-4b1675588fd3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.033359319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21c93bb09b77ec03a9f2d9574b9caa35fbe40835be0044a2ec7732ae40954906,PodSandboxId:4e792c266ca4b35895e7539d32247730fa72bdc8eae3e17d32d60d69e54c3b5d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701889480623354722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0910748767b44525dc011429cdcb2820a62ba7bbc316dafce9302d223fd6c0,PodSandboxId:5c22a429e413b13b813ed0d8a0ca59b11dd08b50b1a8de968bc5191de9db25c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701889429681926473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06802a3dfde454040fe6bb75be32037d2879a04c41e7a34446a3deb95faf5adc,PodSandboxId:d026c6370dd97fcab36935b517d24f35f1a6647f7c47e28a3582441ec8db5cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701889429446740061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adcad7d4b11e4d39b1472b658c077293090293a6b26f31e1cea5fd64242a533,PodSandboxId:d45f1d516f6299a92ca9d775cb2c2c0e32a45a22b94de5cda9068c4daeca324a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701889426934662590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4ffd2647082beb74b346f860cf360e4c949a01780b0650d3ea062781d047aa,PodSandboxId:626962c7162a4f73bc48398e518cc33725ffc18edb325fda7a9ec131b31a9ebe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701889424490440461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-170456
9e66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f46f38161a4cb5ca4c680a5fa5bc16bd08c41e0b382e033f919fa7b1e717596,PodSandboxId:37344960b2b4acdc5a7b3df5efbcced7840ac79f2f173af848b4e9949258aeac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701889403642303311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes
.container.hash: d0de8d55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8adc32c9123d8a2896a7a70ed80f7dd0c4525658e62d7b0e738906487a21bc,PodSandboxId:0f24a01942585ba24877d880c2a5e106e8516f10f996974a431802b5312eaa64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701889403303912758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f9d09bab401059c0ea3253349e049745994c4307f0bd283f1629149db4f07a,PodSandboxId:267da5e7a47c42af493021765fb48ecc69cc72239be9b4e7268b722e72f193aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701889403346396987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12dc683d1dba1543fc803ce878089f2d82893ac8cf6ddfd54be3345f2651af3,PodSandboxId:283f3d74b5d004e68170aac4c8f1c5fa7d0eb3edf7666dfd1370391038be21de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701889403057984243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes
.container.hash: 9422613e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85c4d9aa-0066-4fd4-b92d-4b1675588fd3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.067397218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=10cd0466-c972-4a6b-90cf-f5e38f82d7cb name=/runtime.v1.RuntimeService/Version
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.067456361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=10cd0466-c972-4a6b-90cf-f5e38f82d7cb name=/runtime.v1.RuntimeService/Version
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.069272721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cf466d38-0762-42ed-901e-2edb23a26624 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.069815116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701889485069797443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cf466d38-0762-42ed-901e-2edb23a26624 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.070515256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66b12b55-8c1e-414d-8819-0697799ece06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.070673826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66b12b55-8c1e-414d-8819-0697799ece06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.070889379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21c93bb09b77ec03a9f2d9574b9caa35fbe40835be0044a2ec7732ae40954906,PodSandboxId:4e792c266ca4b35895e7539d32247730fa72bdc8eae3e17d32d60d69e54c3b5d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701889480623354722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0910748767b44525dc011429cdcb2820a62ba7bbc316dafce9302d223fd6c0,PodSandboxId:5c22a429e413b13b813ed0d8a0ca59b11dd08b50b1a8de968bc5191de9db25c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701889429681926473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06802a3dfde454040fe6bb75be32037d2879a04c41e7a34446a3deb95faf5adc,PodSandboxId:d026c6370dd97fcab36935b517d24f35f1a6647f7c47e28a3582441ec8db5cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701889429446740061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adcad7d4b11e4d39b1472b658c077293090293a6b26f31e1cea5fd64242a533,PodSandboxId:d45f1d516f6299a92ca9d775cb2c2c0e32a45a22b94de5cda9068c4daeca324a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701889426934662590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4ffd2647082beb74b346f860cf360e4c949a01780b0650d3ea062781d047aa,PodSandboxId:626962c7162a4f73bc48398e518cc33725ffc18edb325fda7a9ec131b31a9ebe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701889424490440461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-170456
9e66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f46f38161a4cb5ca4c680a5fa5bc16bd08c41e0b382e033f919fa7b1e717596,PodSandboxId:37344960b2b4acdc5a7b3df5efbcced7840ac79f2f173af848b4e9949258aeac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701889403642303311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes
.container.hash: d0de8d55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8adc32c9123d8a2896a7a70ed80f7dd0c4525658e62d7b0e738906487a21bc,PodSandboxId:0f24a01942585ba24877d880c2a5e106e8516f10f996974a431802b5312eaa64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701889403303912758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f9d09bab401059c0ea3253349e049745994c4307f0bd283f1629149db4f07a,PodSandboxId:267da5e7a47c42af493021765fb48ecc69cc72239be9b4e7268b722e72f193aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701889403346396987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12dc683d1dba1543fc803ce878089f2d82893ac8cf6ddfd54be3345f2651af3,PodSandboxId:283f3d74b5d004e68170aac4c8f1c5fa7d0eb3edf7666dfd1370391038be21de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701889403057984243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes
.container.hash: 9422613e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66b12b55-8c1e-414d-8819-0697799ece06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.113134636Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5b08c93f-2c9e-4271-af27-011f77d338f9 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.113222878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5b08c93f-2c9e-4271-af27-011f77d338f9 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.115389454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7291faaa-47a8-4e96-af05-85350208d19c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.115870003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701889485115854079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7291faaa-47a8-4e96-af05-85350208d19c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.116793227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=08e44e6a-ab82-4047-a166-152def112d5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.116873026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=08e44e6a-ab82-4047-a166-152def112d5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.117130768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21c93bb09b77ec03a9f2d9574b9caa35fbe40835be0044a2ec7732ae40954906,PodSandboxId:4e792c266ca4b35895e7539d32247730fa72bdc8eae3e17d32d60d69e54c3b5d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701889480623354722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0910748767b44525dc011429cdcb2820a62ba7bbc316dafce9302d223fd6c0,PodSandboxId:5c22a429e413b13b813ed0d8a0ca59b11dd08b50b1a8de968bc5191de9db25c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701889429681926473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06802a3dfde454040fe6bb75be32037d2879a04c41e7a34446a3deb95faf5adc,PodSandboxId:d026c6370dd97fcab36935b517d24f35f1a6647f7c47e28a3582441ec8db5cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701889429446740061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adcad7d4b11e4d39b1472b658c077293090293a6b26f31e1cea5fd64242a533,PodSandboxId:d45f1d516f6299a92ca9d775cb2c2c0e32a45a22b94de5cda9068c4daeca324a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701889426934662590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4ffd2647082beb74b346f860cf360e4c949a01780b0650d3ea062781d047aa,PodSandboxId:626962c7162a4f73bc48398e518cc33725ffc18edb325fda7a9ec131b31a9ebe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701889424490440461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-170456
9e66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f46f38161a4cb5ca4c680a5fa5bc16bd08c41e0b382e033f919fa7b1e717596,PodSandboxId:37344960b2b4acdc5a7b3df5efbcced7840ac79f2f173af848b4e9949258aeac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701889403642303311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes
.container.hash: d0de8d55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8adc32c9123d8a2896a7a70ed80f7dd0c4525658e62d7b0e738906487a21bc,PodSandboxId:0f24a01942585ba24877d880c2a5e106e8516f10f996974a431802b5312eaa64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701889403303912758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f9d09bab401059c0ea3253349e049745994c4307f0bd283f1629149db4f07a,PodSandboxId:267da5e7a47c42af493021765fb48ecc69cc72239be9b4e7268b722e72f193aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701889403346396987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12dc683d1dba1543fc803ce878089f2d82893ac8cf6ddfd54be3345f2651af3,PodSandboxId:283f3d74b5d004e68170aac4c8f1c5fa7d0eb3edf7666dfd1370391038be21de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701889403057984243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes
.container.hash: 9422613e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=08e44e6a-ab82-4047-a166-152def112d5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.160678007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=330ab134-fdfb-4c66-bc4d-23e3d52a6178 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.160762879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=330ab134-fdfb-4c66-bc4d-23e3d52a6178 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.162451882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1468ef98-782d-4a20-b870-ad6cd8672d12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.163042592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701889485163023779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1468ef98-782d-4a20-b870-ad6cd8672d12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.163961443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd3279d9-1ff6-4111-97dd-848288a3b844 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.164043194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd3279d9-1ff6-4111-97dd-848288a3b844 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:04:45 multinode-593099 crio[717]: time="2023-12-06 19:04:45.164252387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21c93bb09b77ec03a9f2d9574b9caa35fbe40835be0044a2ec7732ae40954906,PodSandboxId:4e792c266ca4b35895e7539d32247730fa72bdc8eae3e17d32d60d69e54c3b5d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701889480623354722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b0910748767b44525dc011429cdcb2820a62ba7bbc316dafce9302d223fd6c0,PodSandboxId:5c22a429e413b13b813ed0d8a0ca59b11dd08b50b1a8de968bc5191de9db25c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701889429681926473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06802a3dfde454040fe6bb75be32037d2879a04c41e7a34446a3deb95faf5adc,PodSandboxId:d026c6370dd97fcab36935b517d24f35f1a6647f7c47e28a3582441ec8db5cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701889429446740061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adcad7d4b11e4d39b1472b658c077293090293a6b26f31e1cea5fd64242a533,PodSandboxId:d45f1d516f6299a92ca9d775cb2c2c0e32a45a22b94de5cda9068c4daeca324a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701889426934662590,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec4ffd2647082beb74b346f860cf360e4c949a01780b0650d3ea062781d047aa,PodSandboxId:626962c7162a4f73bc48398e518cc33725ffc18edb325fda7a9ec131b31a9ebe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701889424490440461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-170456
9e66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f46f38161a4cb5ca4c680a5fa5bc16bd08c41e0b382e033f919fa7b1e717596,PodSandboxId:37344960b2b4acdc5a7b3df5efbcced7840ac79f2f173af848b4e9949258aeac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701889403642303311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes
.container.hash: d0de8d55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8adc32c9123d8a2896a7a70ed80f7dd0c4525658e62d7b0e738906487a21bc,PodSandboxId:0f24a01942585ba24877d880c2a5e106e8516f10f996974a431802b5312eaa64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701889403303912758,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f9d09bab401059c0ea3253349e049745994c4307f0bd283f1629149db4f07a,PodSandboxId:267da5e7a47c42af493021765fb48ecc69cc72239be9b4e7268b722e72f193aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701889403346396987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12dc683d1dba1543fc803ce878089f2d82893ac8cf6ddfd54be3345f2651af3,PodSandboxId:283f3d74b5d004e68170aac4c8f1c5fa7d0eb3edf7666dfd1370391038be21de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701889403057984243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes
.container.hash: 9422613e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd3279d9-1ff6-4111-97dd-848288a3b844 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	21c93bb09b77e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   4e792c266ca4b       busybox-5bc68d56bd-x24l4
	4b0910748767b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   5c22a429e413b       coredns-5dd5756b68-h6rcq
	06802a3dfde45       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       0                   d026c6370dd97       storage-provisioner
	4adcad7d4b11e       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   d45f1d516f629       kindnet-x2r64
	ec4ffd2647082       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   626962c7162a4       kube-proxy-thqkt
	2f46f38161a4c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   37344960b2b4a       etcd-multinode-593099
	74f9d09bab401       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   267da5e7a47c4       kube-controller-manager-multinode-593099
	da8adc32c9123       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   0f24a01942585       kube-scheduler-multinode-593099
	d12dc683d1dba       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   283f3d74b5d00       kube-apiserver-multinode-593099
	
	* 
	* ==> coredns [4b0910748767b44525dc011429cdcb2820a62ba7bbc316dafce9302d223fd6c0] <==
	* [INFO] 10.244.1.2:39991 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159388s
	[INFO] 10.244.0.3:35520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201625s
	[INFO] 10.244.0.3:39333 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001989413s
	[INFO] 10.244.0.3:58153 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000055281s
	[INFO] 10.244.0.3:58910 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000038938s
	[INFO] 10.244.0.3:55517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001044657s
	[INFO] 10.244.0.3:42095 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000031448s
	[INFO] 10.244.0.3:58260 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000025407s
	[INFO] 10.244.0.3:59788 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027763s
	[INFO] 10.244.1.2:48370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146961s
	[INFO] 10.244.1.2:43011 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184285s
	[INFO] 10.244.1.2:34998 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015972s
	[INFO] 10.244.1.2:44708 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108651s
	[INFO] 10.244.0.3:57077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110211s
	[INFO] 10.244.0.3:39655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138599s
	[INFO] 10.244.0.3:34477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067777s
	[INFO] 10.244.0.3:53779 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068891s
	[INFO] 10.244.1.2:54515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224227s
	[INFO] 10.244.1.2:51720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000209495s
	[INFO] 10.244.1.2:36889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000219618s
	[INFO] 10.244.1.2:43139 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000215459s
	[INFO] 10.244.0.3:56095 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095608s
	[INFO] 10.244.0.3:53587 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000048445s
	[INFO] 10.244.0.3:51052 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000029762s
	[INFO] 10.244.0.3:60193 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000024143s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-593099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-593099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=multinode-593099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T19_03_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:03:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-593099
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 19:04:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 19:03:48 +0000   Wed, 06 Dec 2023 19:03:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 19:03:48 +0000   Wed, 06 Dec 2023 19:03:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 19:03:48 +0000   Wed, 06 Dec 2023 19:03:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 19:03:48 +0000   Wed, 06 Dec 2023 19:03:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    multinode-593099
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c9748df5a624dfd9135ae5ea21210d0
	  System UUID:                9c9748df-5a62-4dfd-9135-ae5ea21210d0
	  Boot ID:                    a008a028-efc7-4ed7-a6bb-50b2702aa03a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x24l4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-h6rcq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-593099                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	  kube-system                 kindnet-x2r64                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      62s
	  kube-system                 kube-apiserver-multinode-593099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-multinode-593099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-thqkt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-593099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 75s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s   kubelet          Node multinode-593099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s   kubelet          Node multinode-593099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s   kubelet          Node multinode-593099 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s   node-controller  Node multinode-593099 event: Registered Node multinode-593099 in Controller
	  Normal  NodeReady                57s   kubelet          Node multinode-593099 status is now: NodeReady
	
	
	Name:               multinode-593099-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-593099-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=multinode-593099
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_06T19_04_27_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:04:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-593099-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 19:04:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 19:04:36 +0000   Wed, 06 Dec 2023 19:04:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 19:04:36 +0000   Wed, 06 Dec 2023 19:04:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 19:04:36 +0000   Wed, 06 Dec 2023 19:04:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 19:04:36 +0000   Wed, 06 Dec 2023 19:04:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    multinode-593099-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 859ea87d65b84b6c993011d17f29b172
	  System UUID:                859ea87d-65b8-4b6c-9930-11d17f29b172
	  Boot ID:                    d3d11353-8920-4f9a-adca-538cbebb3918
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-shdgj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-2s5b8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18s
	  kube-system                 kube-proxy-ggxmb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  NodeHasSufficientMemory  18s (x5 over 20s)  kubelet          Node multinode-593099-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x5 over 20s)  kubelet          Node multinode-593099-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x5 over 20s)  kubelet          Node multinode-593099-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node multinode-593099-m02 event: Registered Node multinode-593099-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-593099-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069041] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.438515] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.096461] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143216] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec 6 19:03] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.851893] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.111098] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.142423] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.098873] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.206905] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.796058] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[  +9.268297] systemd-fstab-generator[1261]: Ignoring "noauto" for root device
	[ +19.528343] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [2f46f38161a4cb5ca4c680a5fa5bc16bd08c41e0b382e033f919fa7b1e717596] <==
	* {"level":"info","ts":"2023-12-06T19:03:25.4294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c switched to configuration voters=(17641705551115235980)"}
	{"level":"info","ts":"2023-12-06T19:03:25.429473Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","added-peer-id":"f4d3edba9e42b28c","added-peer-peer-urls":["https://192.168.39.125:2380"]}
	{"level":"info","ts":"2023-12-06T19:03:25.449315Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2023-12-06T19:03:25.449461Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2023-12-06T19:03:25.449488Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-06T19:03:25.453973Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-06T19:03:25.454034Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-06T19:03:25.882012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-06T19:03:25.882112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-06T19:03:25.882164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 1"}
	{"level":"info","ts":"2023-12-06T19:03:25.882199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 2"}
	{"level":"info","ts":"2023-12-06T19:03:25.882223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2023-12-06T19:03:25.88225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 2"}
	{"level":"info","ts":"2023-12-06T19:03:25.882275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2023-12-06T19:03:25.883771Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:multinode-593099 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T19:03:25.883773Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:03:25.883912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T19:03:25.883957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T19:03:25.883975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:03:25.884809Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:03:25.884957Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:03:25.884996Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:03:25.885024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:03:25.885126Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T19:03:25.886035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	
	* 
	* ==> kernel <==
	*  19:04:45 up 1 min,  0 users,  load average: 0.33, 0.19, 0.07
	Linux multinode-593099 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [4adcad7d4b11e4d39b1472b658c077293090293a6b26f31e1cea5fd64242a533] <==
	* I1206 19:03:47.701079       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1206 19:03:47.701234       1 main.go:107] hostIP = 192.168.39.125
	podIP = 192.168.39.125
	I1206 19:03:47.701510       1 main.go:116] setting mtu 1500 for CNI 
	I1206 19:03:47.701561       1 main.go:146] kindnetd IP family: "ipv4"
	I1206 19:03:47.701717       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1206 19:03:48.297151       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:03:48.297543       1 main.go:227] handling current node
	I1206 19:03:58.311199       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:03:58.311330       1 main.go:227] handling current node
	I1206 19:04:08.316713       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:04:08.316872       1 main.go:227] handling current node
	I1206 19:04:18.326351       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:04:18.326551       1 main.go:227] handling current node
	I1206 19:04:28.336234       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:04:28.336481       1 main.go:227] handling current node
	I1206 19:04:28.336530       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I1206 19:04:28.336556       1 main.go:250] Node multinode-593099-m02 has CIDR [10.244.1.0/24] 
	I1206 19:04:28.336846       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.6 Flags: [] Table: 0} 
	I1206 19:04:38.349847       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:04:38.349963       1 main.go:227] handling current node
	I1206 19:04:38.349986       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I1206 19:04:38.350005       1 main.go:250] Node multinode-593099-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [d12dc683d1dba1543fc803ce878089f2d82893ac8cf6ddfd54be3345f2651af3] <==
	* I1206 19:03:27.538015       1 controller.go:624] quota admission added evaluator for: namespaces
	I1206 19:03:27.538710       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1206 19:03:27.539015       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 19:03:27.540427       1 aggregator.go:166] initial CRD sync complete...
	I1206 19:03:27.540465       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 19:03:27.540490       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 19:03:27.540514       1 cache.go:39] Caches are synced for autoregister controller
	I1206 19:03:27.539186       1 shared_informer.go:318] Caches are synced for configmaps
	I1206 19:03:27.563072       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 19:03:27.573342       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1206 19:03:28.340158       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 19:03:28.345745       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 19:03:28.345793       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 19:03:29.021883       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 19:03:29.069532       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 19:03:29.167357       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 19:03:29.181921       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.125]
	I1206 19:03:29.184049       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 19:03:29.194820       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 19:03:29.458502       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 19:03:30.502171       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 19:03:30.526638       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 19:03:30.554819       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1206 19:03:43.074094       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1206 19:03:43.285189       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [74f9d09bab401059c0ea3253349e049745994c4307f0bd283f1629149db4f07a] <==
	* I1206 19:03:43.987339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="254.034µs"
	I1206 19:03:48.607917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="452.07µs"
	I1206 19:03:48.633217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.773µs"
	I1206 19:03:49.863770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="218.719µs"
	I1206 19:03:50.868982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.180971ms"
	I1206 19:03:50.869396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="214.206µs"
	I1206 19:03:52.290141       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1206 19:04:27.319856       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-593099-m02\" does not exist"
	I1206 19:04:27.349139       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-593099-m02" podCIDRs=["10.244.1.0/24"]
	I1206 19:04:27.357532       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2s5b8"
	I1206 19:04:27.357786       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ggxmb"
	I1206 19:04:32.296938       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-593099-m02"
	I1206 19:04:32.297269       1 event.go:307] "Event occurred" object="multinode-593099-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-593099-m02 event: Registered Node multinode-593099-m02 in Controller"
	I1206 19:04:36.283772       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-593099-m02"
	I1206 19:04:38.841900       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1206 19:04:38.864264       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-shdgj"
	I1206 19:04:38.879479       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x24l4"
	I1206 19:04:38.903025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.475833ms"
	I1206 19:04:38.917788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.585937ms"
	I1206 19:04:38.938704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.813098ms"
	I1206 19:04:38.938854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.024µs"
	I1206 19:04:41.041957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.097448ms"
	I1206 19:04:41.042387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="165.83µs"
	I1206 19:04:41.325816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.964427ms"
	I1206 19:04:41.326100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.96µs"
	
	* 
	* ==> kube-proxy [ec4ffd2647082beb74b346f860cf360e4c949a01780b0650d3ea062781d047aa] <==
	* I1206 19:03:44.661881       1 server_others.go:69] "Using iptables proxy"
	I1206 19:03:44.673719       1 node.go:141] Successfully retrieved node IP: 192.168.39.125
	I1206 19:03:44.732373       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1206 19:03:44.732440       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 19:03:44.735641       1 server_others.go:152] "Using iptables Proxier"
	I1206 19:03:44.735715       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 19:03:44.735944       1 server.go:846] "Version info" version="v1.28.4"
	I1206 19:03:44.735987       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:03:44.736808       1 config.go:188] "Starting service config controller"
	I1206 19:03:44.736861       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 19:03:44.736894       1 config.go:97] "Starting endpoint slice config controller"
	I1206 19:03:44.736909       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 19:03:44.739009       1 config.go:315] "Starting node config controller"
	I1206 19:03:44.739064       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 19:03:44.837783       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 19:03:44.837863       1 shared_informer.go:318] Caches are synced for service config
	I1206 19:03:44.839427       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [da8adc32c9123d8a2896a7a70ed80f7dd0c4525658e62d7b0e738906487a21bc] <==
	* W1206 19:03:27.472568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 19:03:27.473363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 19:03:27.472796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 19:03:27.473409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1206 19:03:27.472835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 19:03:27.475431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 19:03:27.475778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1206 19:03:27.475789       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 19:03:27.475837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 19:03:27.475553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 19:03:27.475882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 19:03:27.475811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 19:03:28.290215       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 19:03:28.290271       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 19:03:28.299273       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 19:03:28.299402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 19:03:28.398749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 19:03:28.398844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 19:03:28.403665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 19:03:28.403714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 19:03:28.547711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 19:03:28.547806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 19:03:28.653316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 19:03:28.653404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1206 19:03:30.161393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:02:58 UTC, ends at Wed 2023-12-06 19:04:45 UTC. --
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221687    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdm5v\" (UniqueName: \"kubernetes.io/projected/0012fda4-56e7-4054-ab90-1704569e66e8-kube-api-access-tdm5v\") pod \"kube-proxy-thqkt\" (UID: \"0012fda4-56e7-4054-ab90-1704569e66e8\") " pod="kube-system/kube-proxy-thqkt"
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221735    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dafec99-c18b-40ca-8b9d-b5d520390c8c-lib-modules\") pod \"kindnet-x2r64\" (UID: \"1dafec99-c18b-40ca-8b9d-b5d520390c8c\") " pod="kube-system/kindnet-x2r64"
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221756    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0012fda4-56e7-4054-ab90-1704569e66e8-kube-proxy\") pod \"kube-proxy-thqkt\" (UID: \"0012fda4-56e7-4054-ab90-1704569e66e8\") " pod="kube-system/kube-proxy-thqkt"
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221774    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0012fda4-56e7-4054-ab90-1704569e66e8-xtables-lock\") pod \"kube-proxy-thqkt\" (UID: \"0012fda4-56e7-4054-ab90-1704569e66e8\") " pod="kube-system/kube-proxy-thqkt"
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221842    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1dafec99-c18b-40ca-8b9d-b5d520390c8c-cni-cfg\") pod \"kindnet-x2r64\" (UID: \"1dafec99-c18b-40ca-8b9d-b5d520390c8c\") " pod="kube-system/kindnet-x2r64"
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221866    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dafec99-c18b-40ca-8b9d-b5d520390c8c-xtables-lock\") pod \"kindnet-x2r64\" (UID: \"1dafec99-c18b-40ca-8b9d-b5d520390c8c\") " pod="kube-system/kindnet-x2r64"
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221884    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0012fda4-56e7-4054-ab90-1704569e66e8-lib-modules\") pod \"kube-proxy-thqkt\" (UID: \"0012fda4-56e7-4054-ab90-1704569e66e8\") " pod="kube-system/kube-proxy-thqkt"
	Dec 06 19:03:43 multinode-593099 kubelet[1268]: I1206 19:03:43.221903    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqjfq\" (UniqueName: \"kubernetes.io/projected/1dafec99-c18b-40ca-8b9d-b5d520390c8c-kube-api-access-cqjfq\") pod \"kindnet-x2r64\" (UID: \"1dafec99-c18b-40ca-8b9d-b5d520390c8c\") " pod="kube-system/kindnet-x2r64"
	Dec 06 19:03:47 multinode-593099 kubelet[1268]: I1206 19:03:47.827325    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-thqkt" podStartSLOduration=4.827289465 podCreationTimestamp="2023-12-06 19:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 19:03:44.81907913 +0000 UTC m=+14.340644722" watchObservedRunningTime="2023-12-06 19:03:47.827289465 +0000 UTC m=+17.348855057"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.561979    1268 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.604644    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-x2r64" podStartSLOduration=5.604539508 podCreationTimestamp="2023-12-06 19:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 19:03:47.829482943 +0000 UTC m=+17.351048535" watchObservedRunningTime="2023-12-06 19:03:48.604539508 +0000 UTC m=+18.126105100"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.604844    1268 topology_manager.go:215] "Topology Admit Handler" podUID="85247dde-4cee-482e-8f9b-a9e8f4e7172e" podNamespace="kube-system" podName="coredns-5dd5756b68-h6rcq"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.614446    1268 topology_manager.go:215] "Topology Admit Handler" podUID="35974b37-5aff-4940-8e2d-5fec9d1e2166" podNamespace="kube-system" podName="storage-provisioner"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.663216    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2b69\" (UniqueName: \"kubernetes.io/projected/35974b37-5aff-4940-8e2d-5fec9d1e2166-kube-api-access-q2b69\") pod \"storage-provisioner\" (UID: \"35974b37-5aff-4940-8e2d-5fec9d1e2166\") " pod="kube-system/storage-provisioner"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.663258    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md8ng\" (UniqueName: \"kubernetes.io/projected/85247dde-4cee-482e-8f9b-a9e8f4e7172e-kube-api-access-md8ng\") pod \"coredns-5dd5756b68-h6rcq\" (UID: \"85247dde-4cee-482e-8f9b-a9e8f4e7172e\") " pod="kube-system/coredns-5dd5756b68-h6rcq"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.663280    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85247dde-4cee-482e-8f9b-a9e8f4e7172e-config-volume\") pod \"coredns-5dd5756b68-h6rcq\" (UID: \"85247dde-4cee-482e-8f9b-a9e8f4e7172e\") " pod="kube-system/coredns-5dd5756b68-h6rcq"
	Dec 06 19:03:48 multinode-593099 kubelet[1268]: I1206 19:03:48.663298    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/35974b37-5aff-4940-8e2d-5fec9d1e2166-tmp\") pod \"storage-provisioner\" (UID: \"35974b37-5aff-4940-8e2d-5fec9d1e2166\") " pod="kube-system/storage-provisioner"
	Dec 06 19:03:49 multinode-593099 kubelet[1268]: I1206 19:03:49.862070    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.862031033 podCreationTimestamp="2023-12-06 19:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 19:03:49.846810493 +0000 UTC m=+19.368376079" watchObservedRunningTime="2023-12-06 19:03:49.862031033 +0000 UTC m=+19.383596624"
	Dec 06 19:03:50 multinode-593099 kubelet[1268]: I1206 19:03:50.848292    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h6rcq" podStartSLOduration=7.848257378 podCreationTimestamp="2023-12-06 19:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 19:03:49.863142872 +0000 UTC m=+19.384708466" watchObservedRunningTime="2023-12-06 19:03:50.848257378 +0000 UTC m=+20.369822969"
	Dec 06 19:04:30 multinode-593099 kubelet[1268]: E1206 19:04:30.738194    1268 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 19:04:30 multinode-593099 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 19:04:30 multinode-593099 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 19:04:30 multinode-593099 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 19:04:38 multinode-593099 kubelet[1268]: I1206 19:04:38.899441    1268 topology_manager.go:215] "Topology Admit Handler" podUID="b2c96072-6364-4b62-9a74-2aa19b4a2e69" podNamespace="default" podName="busybox-5bc68d56bd-x24l4"
	Dec 06 19:04:38 multinode-593099 kubelet[1268]: I1206 19:04:38.974973    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk2vp\" (UniqueName: \"kubernetes.io/projected/b2c96072-6364-4b62-9a74-2aa19b4a2e69-kube-api-access-pk2vp\") pod \"busybox-5bc68d56bd-x24l4\" (UID: \"b2c96072-6364-4b62-9a74-2aa19b4a2e69\") " pod="default/busybox-5bc68d56bd-x24l4"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-593099 -n multinode-593099
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-593099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.34s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (693.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-593099
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-593099
E1206 19:06:19.211114   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:07:54.631361   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-593099: exit status 82 (2m1.677883079s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-593099"  ...
	* Stopping node "multinode-593099"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-593099" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-593099 --wait=true -v=8 --alsologtostderr
E1206 19:08:22.657375   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:09:45.703388   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:10:51.525138   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:12:54.632203   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:13:22.657640   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:14:17.678654   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:15:51.525765   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:17:14.572452   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-593099 --wait=true -v=8 --alsologtostderr: (9m28.176303037s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-593099
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-593099 -n multinode-593099
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-593099 logs -n 25: (1.70185776s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-593099 cp multinode-593099-m02:/home/docker/cp-test.txt                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile136929012/001/cp-test_multinode-593099-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-593099 cp multinode-593099-m02:/home/docker/cp-test.txt                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099:/home/docker/cp-test_multinode-593099-m02_multinode-593099.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n multinode-593099 sudo cat                                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | /home/docker/cp-test_multinode-593099-m02_multinode-593099.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-593099 cp multinode-593099-m02:/home/docker/cp-test.txt                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m03:/home/docker/cp-test_multinode-593099-m02_multinode-593099-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n multinode-593099-m03 sudo cat                                   | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | /home/docker/cp-test_multinode-593099-m02_multinode-593099-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-593099 cp testdata/cp-test.txt                                                | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-593099 cp multinode-593099-m03:/home/docker/cp-test.txt                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile136929012/001/cp-test_multinode-593099-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-593099 cp multinode-593099-m03:/home/docker/cp-test.txt                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099:/home/docker/cp-test_multinode-593099-m03_multinode-593099.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n multinode-593099 sudo cat                                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | /home/docker/cp-test_multinode-593099-m03_multinode-593099.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-593099 cp multinode-593099-m03:/home/docker/cp-test.txt                       | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m02:/home/docker/cp-test_multinode-593099-m03_multinode-593099-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n                                                                 | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | multinode-593099-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-593099 ssh -n multinode-593099-m02 sudo cat                                   | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	|         | /home/docker/cp-test_multinode-593099-m03_multinode-593099-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-593099 node stop m03                                                          | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:05 UTC |
	| node    | multinode-593099 node start                                                             | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:05 UTC | 06 Dec 23 19:06 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-593099                                                                | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:06 UTC |                     |
	| stop    | -p multinode-593099                                                                     | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:06 UTC |                     |
	| start   | -p multinode-593099                                                                     | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-593099                                                                | multinode-593099 | jenkins | v1.32.0 | 06 Dec 23 19:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:08:11
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:08:11.509355   86706 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:08:11.509678   86706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:08:11.509688   86706 out.go:309] Setting ErrFile to fd 2...
	I1206 19:08:11.509693   86706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:08:11.509880   86706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:08:11.510453   86706 out.go:303] Setting JSON to false
	I1206 19:08:11.511348   86706 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6642,"bootTime":1701883050,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:08:11.511406   86706 start.go:138] virtualization: kvm guest
	I1206 19:08:11.514168   86706 out.go:177] * [multinode-593099] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:08:11.515832   86706 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:08:11.515848   86706 notify.go:220] Checking for updates...
	I1206 19:08:11.517334   86706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:08:11.518761   86706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:08:11.520345   86706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:08:11.521808   86706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:08:11.523102   86706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:08:11.524798   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:08:11.524878   86706 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:08:11.525336   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:08:11.525373   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:08:11.539569   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43075
	I1206 19:08:11.539988   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:08:11.540547   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:08:11.540585   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:08:11.540921   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:08:11.541109   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:08:11.575787   86706 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:08:11.577137   86706 start.go:298] selected driver: kvm2
	I1206 19:08:11.577150   86706 start.go:902] validating driver "kvm2" against &{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:08:11.577369   86706 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:08:11.577705   86706 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:08:11.577812   86706 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:08:11.592493   86706 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:08:11.593193   86706 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:08:11.593287   86706 cni.go:84] Creating CNI manager for ""
	I1206 19:08:11.593301   86706 cni.go:136] 3 nodes found, recommending kindnet
	I1206 19:08:11.593309   86706 start_flags.go:323] config:
	{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:08:11.593574   86706 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:08:11.596329   86706 out.go:177] * Starting control plane node multinode-593099 in cluster multinode-593099
	I1206 19:08:11.597745   86706 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:08:11.597777   86706 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:08:11.597785   86706 cache.go:56] Caching tarball of preloaded images
	I1206 19:08:11.597886   86706 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:08:11.597896   86706 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:08:11.598031   86706 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:08:11.598212   86706 start.go:365] acquiring machines lock for multinode-593099: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:08:11.598256   86706 start.go:369] acquired machines lock for "multinode-593099" in 26.412µs
	I1206 19:08:11.598270   86706 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:08:11.598278   86706 fix.go:54] fixHost starting: 
	I1206 19:08:11.598532   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:08:11.598566   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:08:11.612176   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40623
	I1206 19:08:11.612612   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:08:11.613068   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:08:11.613094   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:08:11.613461   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:08:11.613650   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:08:11.613786   86706 main.go:141] libmachine: (multinode-593099) Calling .GetState
	I1206 19:08:11.615201   86706 fix.go:102] recreateIfNeeded on multinode-593099: state=Running err=<nil>
	W1206 19:08:11.615222   86706 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:08:11.618057   86706 out.go:177] * Updating the running kvm2 "multinode-593099" VM ...
	I1206 19:08:11.619499   86706 machine.go:88] provisioning docker machine ...
	I1206 19:08:11.619518   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:08:11.619722   86706 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:08:11.619889   86706 buildroot.go:166] provisioning hostname "multinode-593099"
	I1206 19:08:11.619912   86706 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:08:11.620052   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:08:11.622484   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:08:11.622905   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:08:11.622945   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:08:11.623100   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:08:11.623258   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:08:11.623404   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:08:11.623552   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:08:11.623712   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:08:11.624216   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:08:11.624236   86706 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-593099 && echo "multinode-593099" | sudo tee /etc/hostname
	I1206 19:08:30.061540   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:08:36.141576   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:08:39.213547   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:08:45.293612   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:08:48.365573   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:08:54.445597   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:08:57.517496   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:03.597588   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:06.669529   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:12.749565   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:15.821543   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:21.901553   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:24.973527   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:31.053558   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:34.125557   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:40.205575   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:43.277546   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:49.357512   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:52.429485   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:09:58.509570   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:01.581512   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:07.661558   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:10.733509   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:16.813530   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:19.885598   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:25.965571   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:29.037532   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:35.117516   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:38.189541   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:44.269516   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:47.341483   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:53.421530   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:10:56.493515   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:02.573536   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:05.645487   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:11.725522   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:14.797538   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:20.877552   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:23.949553   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:30.029530   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:33.101491   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:39.181562   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:42.253494   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:48.333516   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:51.405484   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:11:57.485565   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:00.557533   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:06.637593   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:09.709519   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:15.789560   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:18.861565   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:24.941536   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:28.013564   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:34.093519   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:37.165600   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:43.245530   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:46.317494   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:52.397550   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:12:55.469540   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:13:01.549559   86706 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.125:22: connect: no route to host
	I1206 19:13:04.551822   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:13:04.551858   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:04.553969   86706 machine.go:91] provisioned docker machine in 4m52.934446555s
	I1206 19:13:04.554068   86706 fix.go:56] fixHost completed within 4m52.955790063s
	I1206 19:13:04.554088   86706 start.go:83] releasing machines lock for "multinode-593099", held for 4m52.955821005s
	W1206 19:13:04.554110   86706 start.go:694] error starting host: provision: host is not running
	W1206 19:13:04.554312   86706 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1206 19:13:04.554329   86706 start.go:709] Will try again in 5 seconds ...
	I1206 19:13:09.556748   86706 start.go:365] acquiring machines lock for multinode-593099: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:13:09.556900   86706 start.go:369] acquired machines lock for "multinode-593099" in 79.269µs
	I1206 19:13:09.556940   86706 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:13:09.556949   86706 fix.go:54] fixHost starting: 
	I1206 19:13:09.557348   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:13:09.557372   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:13:09.572406   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36561
	I1206 19:13:09.573007   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:13:09.573482   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:13:09.573504   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:13:09.573881   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:13:09.574092   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:13:09.574261   86706 main.go:141] libmachine: (multinode-593099) Calling .GetState
	I1206 19:13:09.575979   86706 fix.go:102] recreateIfNeeded on multinode-593099: state=Stopped err=<nil>
	I1206 19:13:09.576001   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	W1206 19:13:09.576176   86706 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:13:09.579685   86706 out.go:177] * Restarting existing kvm2 VM for "multinode-593099" ...
	I1206 19:13:09.581542   86706 main.go:141] libmachine: (multinode-593099) Calling .Start
	I1206 19:13:09.581736   86706 main.go:141] libmachine: (multinode-593099) Ensuring networks are active...
	I1206 19:13:09.582657   86706 main.go:141] libmachine: (multinode-593099) Ensuring network default is active
	I1206 19:13:09.583076   86706 main.go:141] libmachine: (multinode-593099) Ensuring network mk-multinode-593099 is active
	I1206 19:13:09.583414   86706 main.go:141] libmachine: (multinode-593099) Getting domain xml...
	I1206 19:13:09.584113   86706 main.go:141] libmachine: (multinode-593099) Creating domain...
	I1206 19:13:10.819010   86706 main.go:141] libmachine: (multinode-593099) Waiting to get IP...
	I1206 19:13:10.820210   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:10.820672   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:10.820810   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:10.820678   87535 retry.go:31] will retry after 252.058407ms: waiting for machine to come up
	I1206 19:13:11.074227   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:11.074837   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:11.074869   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:11.074775   87535 retry.go:31] will retry after 380.938757ms: waiting for machine to come up
	I1206 19:13:11.457681   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:11.458141   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:11.458184   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:11.458075   87535 retry.go:31] will retry after 469.276631ms: waiting for machine to come up
	I1206 19:13:11.928664   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:11.929204   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:11.929257   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:11.929135   87535 retry.go:31] will retry after 514.950721ms: waiting for machine to come up
	I1206 19:13:12.445946   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:12.446494   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:12.446532   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:12.446432   87535 retry.go:31] will retry after 720.970433ms: waiting for machine to come up
	I1206 19:13:13.169530   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:13.170029   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:13.170060   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:13.169962   87535 retry.go:31] will retry after 848.393622ms: waiting for machine to come up
	I1206 19:13:14.020116   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:14.020596   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:14.020623   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:14.020553   87535 retry.go:31] will retry after 755.028173ms: waiting for machine to come up
	I1206 19:13:14.776980   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:14.777411   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:14.777435   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:14.777362   87535 retry.go:31] will retry after 1.138083009s: waiting for machine to come up
	I1206 19:13:15.917752   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:15.918254   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:15.918290   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:15.918192   87535 retry.go:31] will retry after 1.504793405s: waiting for machine to come up
	I1206 19:13:17.424815   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:17.425170   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:17.425220   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:17.425103   87535 retry.go:31] will retry after 1.751077268s: waiting for machine to come up
	I1206 19:13:19.179111   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:19.179550   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:19.179589   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:19.179495   87535 retry.go:31] will retry after 2.903823012s: waiting for machine to come up
	I1206 19:13:22.086317   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:22.086791   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:22.086816   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:22.086743   87535 retry.go:31] will retry after 2.225285788s: waiting for machine to come up
	I1206 19:13:24.313346   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:24.313790   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:24.313819   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:24.313746   87535 retry.go:31] will retry after 2.950249303s: waiting for machine to come up
	I1206 19:13:27.266427   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:27.266890   86706 main.go:141] libmachine: (multinode-593099) DBG | unable to find current IP address of domain multinode-593099 in network mk-multinode-593099
	I1206 19:13:27.266913   86706 main.go:141] libmachine: (multinode-593099) DBG | I1206 19:13:27.266851   87535 retry.go:31] will retry after 4.145582339s: waiting for machine to come up
	I1206 19:13:31.415288   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.415730   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has current primary IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.415751   86706 main.go:141] libmachine: (multinode-593099) Found IP for machine: 192.168.39.125
	I1206 19:13:31.415767   86706 main.go:141] libmachine: (multinode-593099) Reserving static IP address...
	I1206 19:13:31.416297   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "multinode-593099", mac: "52:54:00:37:16:c6", ip: "192.168.39.125"} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.416334   86706 main.go:141] libmachine: (multinode-593099) Reserved static IP address: 192.168.39.125
	I1206 19:13:31.416363   86706 main.go:141] libmachine: (multinode-593099) DBG | skip adding static IP to network mk-multinode-593099 - found existing host DHCP lease matching {name: "multinode-593099", mac: "52:54:00:37:16:c6", ip: "192.168.39.125"}
	I1206 19:13:31.416383   86706 main.go:141] libmachine: (multinode-593099) DBG | Getting to WaitForSSH function...
	I1206 19:13:31.416399   86706 main.go:141] libmachine: (multinode-593099) Waiting for SSH to be available...
	I1206 19:13:31.418710   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.419096   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.419134   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.419269   86706 main.go:141] libmachine: (multinode-593099) DBG | Using SSH client type: external
	I1206 19:13:31.419297   86706 main.go:141] libmachine: (multinode-593099) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa (-rw-------)
	I1206 19:13:31.419335   86706 main.go:141] libmachine: (multinode-593099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:13:31.419348   86706 main.go:141] libmachine: (multinode-593099) DBG | About to run SSH command:
	I1206 19:13:31.419359   86706 main.go:141] libmachine: (multinode-593099) DBG | exit 0
	I1206 19:13:31.504826   86706 main.go:141] libmachine: (multinode-593099) DBG | SSH cmd err, output: <nil>: 
	I1206 19:13:31.505182   86706 main.go:141] libmachine: (multinode-593099) Calling .GetConfigRaw
	I1206 19:13:31.505850   86706 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:13:31.508328   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.508689   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.508722   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.509032   86706 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:13:31.509256   86706 machine.go:88] provisioning docker machine ...
	I1206 19:13:31.509280   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:13:31.509486   86706 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:13:31.511536   86706 buildroot.go:166] provisioning hostname "multinode-593099"
	I1206 19:13:31.511556   86706 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:13:31.511730   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:31.513976   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.514274   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.514299   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.514403   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:31.514586   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:31.514758   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:31.514903   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:31.515060   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:13:31.515560   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:13:31.515580   86706 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-593099 && echo "multinode-593099" | sudo tee /etc/hostname
	I1206 19:13:31.642187   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-593099
	
	I1206 19:13:31.642224   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:31.645157   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.645574   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.645607   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.645748   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:31.645993   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:31.646173   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:31.646299   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:31.646486   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:13:31.646986   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:13:31.647018   86706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-593099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-593099/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-593099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:13:31.769572   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:13:31.769610   86706 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:13:31.769648   86706 buildroot.go:174] setting up certificates
	I1206 19:13:31.769660   86706 provision.go:83] configureAuth start
	I1206 19:13:31.769676   86706 main.go:141] libmachine: (multinode-593099) Calling .GetMachineName
	I1206 19:13:31.769968   86706 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:13:31.772621   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.773055   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.773092   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.773225   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:31.775757   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.776144   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.776175   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.776314   86706 provision.go:138] copyHostCerts
	I1206 19:13:31.776354   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:13:31.776393   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:13:31.776403   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:13:31.776488   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:13:31.776602   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:13:31.776629   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:13:31.776640   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:13:31.776681   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:13:31.776807   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:13:31.776838   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:13:31.776849   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:13:31.776891   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:13:31.776978   86706 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.multinode-593099 san=[192.168.39.125 192.168.39.125 localhost 127.0.0.1 minikube multinode-593099]
	I1206 19:13:31.967248   86706 provision.go:172] copyRemoteCerts
	I1206 19:13:31.967327   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:13:31.967362   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:31.970326   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.970704   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:31.970732   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:31.970919   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:31.971102   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:31.971297   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:31.971410   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:13:32.058545   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 19:13:32.058622   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:13:32.080747   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 19:13:32.080817   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1206 19:13:32.102127   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 19:13:32.102196   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:13:32.123587   86706 provision.go:86] duration metric: configureAuth took 353.906638ms
	I1206 19:13:32.123616   86706 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:13:32.123892   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:13:32.123985   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:32.126770   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.127107   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:32.127132   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.127333   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:32.127503   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:32.127708   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:32.127876   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:32.128054   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:13:32.128369   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:13:32.128384   86706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:13:32.431317   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:13:32.431348   86706 machine.go:91] provisioned docker machine in 922.07755ms
	I1206 19:13:32.431359   86706 start.go:300] post-start starting for "multinode-593099" (driver="kvm2")
	I1206 19:13:32.431399   86706 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:13:32.431439   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:13:32.431781   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:13:32.431824   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:32.434563   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.434917   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:32.434949   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.435072   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:32.435259   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:32.435401   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:32.435503   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:13:32.519544   86706 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:13:32.523540   86706 command_runner.go:130] > NAME=Buildroot
	I1206 19:13:32.523561   86706 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1206 19:13:32.523568   86706 command_runner.go:130] > ID=buildroot
	I1206 19:13:32.523576   86706 command_runner.go:130] > VERSION_ID=2021.02.12
	I1206 19:13:32.523582   86706 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1206 19:13:32.523725   86706 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:13:32.523743   86706 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:13:32.523815   86706 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:13:32.523885   86706 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:13:32.523894   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /etc/ssl/certs/708342.pem
	I1206 19:13:32.523970   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:13:32.533272   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:13:32.555852   86706 start.go:303] post-start completed in 124.475563ms
	I1206 19:13:32.555882   86706 fix.go:56] fixHost completed within 22.998932194s
	I1206 19:13:32.555909   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:32.558597   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.558993   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:32.559020   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.559167   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:32.559377   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:32.559548   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:32.559680   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:32.559870   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:13:32.560344   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1206 19:13:32.560360   86706 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:13:32.673796   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701890012.622139246
	
	I1206 19:13:32.673821   86706 fix.go:206] guest clock: 1701890012.622139246
	I1206 19:13:32.673831   86706 fix.go:219] Guest: 2023-12-06 19:13:32.622139246 +0000 UTC Remote: 2023-12-06 19:13:32.555887078 +0000 UTC m=+321.101238831 (delta=66.252168ms)
	I1206 19:13:32.673855   86706 fix.go:190] guest clock delta is within tolerance: 66.252168ms
	I1206 19:13:32.673860   86706 start.go:83] releasing machines lock for "multinode-593099", held for 23.116952249s
	I1206 19:13:32.673884   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:13:32.674161   86706 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:13:32.676525   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.676882   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:32.676916   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.677076   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:13:32.677575   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:13:32.677759   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:13:32.677840   86706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:13:32.677895   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:32.677971   86706 ssh_runner.go:195] Run: cat /version.json
	I1206 19:13:32.677998   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:13:32.680203   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.680533   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:32.680560   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.680583   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.680736   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:32.680935   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:32.681085   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:32.681107   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:32.681114   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:32.681265   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:13:32.681328   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:13:32.681432   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:13:32.681590   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:13:32.681719   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:13:32.762864   86706 command_runner.go:130] > {"iso_version": "v1.32.1-1701387192-17703", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "196015715c4eb12e436d5bb69e555ba604cda88e"}
	I1206 19:13:32.763090   86706 ssh_runner.go:195] Run: systemctl --version
	I1206 19:13:32.790853   86706 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 19:13:32.790901   86706 command_runner.go:130] > systemd 247 (247)
	I1206 19:13:32.790927   86706 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1206 19:13:32.790993   86706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:13:32.938023   86706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 19:13:32.944261   86706 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 19:13:32.944312   86706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:13:32.944413   86706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:13:32.960240   86706 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1206 19:13:32.960666   86706 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:13:32.960687   86706 start.go:475] detecting cgroup driver to use...
	I1206 19:13:32.960756   86706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:13:32.977107   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:13:32.988934   86706 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:13:32.989016   86706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:13:33.000803   86706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:13:33.013109   86706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:13:33.026656   86706 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1206 19:13:33.117716   86706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:13:33.236579   86706 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1206 19:13:33.236629   86706 docker.go:219] disabling docker service ...
	I1206 19:13:33.236690   86706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:13:33.250696   86706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:13:33.261622   86706 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1206 19:13:33.261759   86706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:13:33.275375   86706 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1206 19:13:33.363124   86706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:13:33.463282   86706 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1206 19:13:33.463348   86706 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1206 19:13:33.463426   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:13:33.477092   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:13:33.494603   86706 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 19:13:33.494666   86706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:13:33.494727   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:13:33.503704   86706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:13:33.503769   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:13:33.512780   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:13:33.521815   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
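The three sed invocations above rewrite the existing pause_image and cgroup_manager lines in /etc/crio/crio.conf.d/02-crio.conf and append a conmon_cgroup line after cgroup_manager. Assuming the drop-in already carried those keys (the sed expressions replace lines rather than create the file), the relevant lines afterwards would read roughly:

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"

This matches the cgroup_manager and conmon_cgroup values visible in the crio config dump later in this log.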
	I1206 19:13:33.531209   86706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:13:33.540472   86706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:13:33.548140   86706 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:13:33.548172   86706 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:13:33.548216   86706 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:13:33.560484   86706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:13:33.568594   86706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:13:33.668289   86706 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:13:33.834863   86706 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:13:33.834942   86706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:13:33.839490   86706 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 19:13:33.839518   86706 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 19:13:33.839538   86706 command_runner.go:130] > Device: 16h/22d	Inode: 740         Links: 1
	I1206 19:13:33.839548   86706 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:13:33.839556   86706 command_runner.go:130] > Access: 2023-12-06 19:13:33.767512873 +0000
	I1206 19:13:33.839568   86706 command_runner.go:130] > Modify: 2023-12-06 19:13:33.767512873 +0000
	I1206 19:13:33.839575   86706 command_runner.go:130] > Change: 2023-12-06 19:13:33.767512873 +0000
	I1206 19:13:33.839579   86706 command_runner.go:130] >  Birth: -
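start.go:522 above gives the restarted runtime up to 60s to expose /var/run/crio/crio.sock, probing it with stat. A minimal Go sketch of that kind of wait loop, assuming a 1s polling interval (the real implementation and interval are not shown here):

// Illustrative wait-for-socket loop; the polling interval is an assumption.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}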
	I1206 19:13:33.839760   86706 start.go:543] Will wait 60s for crictl version
	I1206 19:13:33.839825   86706 ssh_runner.go:195] Run: which crictl
	I1206 19:13:33.843878   86706 command_runner.go:130] > /usr/bin/crictl
	I1206 19:13:33.843947   86706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:13:33.884777   86706 command_runner.go:130] > Version:  0.1.0
	I1206 19:13:33.884796   86706 command_runner.go:130] > RuntimeName:  cri-o
	I1206 19:13:33.884806   86706 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1206 19:13:33.884812   86706 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 19:13:33.884845   86706 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:13:33.884914   86706 ssh_runner.go:195] Run: crio --version
	I1206 19:13:33.935592   86706 command_runner.go:130] > crio version 1.24.1
	I1206 19:13:33.935616   86706 command_runner.go:130] > Version:          1.24.1
	I1206 19:13:33.935623   86706 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:13:33.935627   86706 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:13:33.935634   86706 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:13:33.935638   86706 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:13:33.935642   86706 command_runner.go:130] > Compiler:         gc
	I1206 19:13:33.935647   86706 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:13:33.935653   86706 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:13:33.935659   86706 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:13:33.935663   86706 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:13:33.935668   86706 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:13:33.936986   86706 ssh_runner.go:195] Run: crio --version
	I1206 19:13:33.985378   86706 command_runner.go:130] > crio version 1.24.1
	I1206 19:13:33.985401   86706 command_runner.go:130] > Version:          1.24.1
	I1206 19:13:33.985411   86706 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:13:33.985423   86706 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:13:33.985432   86706 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:13:33.985438   86706 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:13:33.985444   86706 command_runner.go:130] > Compiler:         gc
	I1206 19:13:33.985451   86706 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:13:33.985463   86706 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:13:33.985476   86706 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:13:33.985485   86706 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:13:33.985494   86706 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:13:33.987709   86706 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:13:33.988985   86706 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:13:33.991920   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:33.992304   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:13:33.992355   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:13:33.992573   86706 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:13:33.996648   86706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:13:34.009096   86706 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:13:34.009154   86706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:13:34.043799   86706 command_runner.go:130] > {
	I1206 19:13:34.043831   86706 command_runner.go:130] >   "images": [
	I1206 19:13:34.043837   86706 command_runner.go:130] >     {
	I1206 19:13:34.043850   86706 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1206 19:13:34.043870   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:34.043884   86706 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1206 19:13:34.043893   86706 command_runner.go:130] >       ],
	I1206 19:13:34.043903   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:34.043919   86706 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1206 19:13:34.043933   86706 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1206 19:13:34.043942   86706 command_runner.go:130] >       ],
	I1206 19:13:34.043952   86706 command_runner.go:130] >       "size": "750414",
	I1206 19:13:34.043961   86706 command_runner.go:130] >       "uid": {
	I1206 19:13:34.043971   86706 command_runner.go:130] >         "value": "65535"
	I1206 19:13:34.043981   86706 command_runner.go:130] >       },
	I1206 19:13:34.043991   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:34.044012   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:34.044021   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:34.044030   86706 command_runner.go:130] >     }
	I1206 19:13:34.044038   86706 command_runner.go:130] >   ]
	I1206 19:13:34.044047   86706 command_runner.go:130] > }
	I1206 19:13:34.045106   86706 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:13:34.045189   86706 ssh_runner.go:195] Run: which lz4
	I1206 19:13:34.048747   86706 command_runner.go:130] > /usr/bin/lz4
	I1206 19:13:34.049056   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1206 19:13:34.049158   86706 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:13:34.053039   86706 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:13:34.053291   86706 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:13:34.053321   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:13:35.913953   86706 crio.go:444] Took 1.864836 seconds to copy over tarball
	I1206 19:13:35.914018   86706 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:13:38.710045   86706 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.795992263s)
	I1206 19:13:38.710103   86706 crio.go:451] Took 2.796101 seconds to extract the tarball
	I1206 19:13:38.710118   86706 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:13:38.751589   86706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:13:38.798403   86706 command_runner.go:130] > {
	I1206 19:13:38.798429   86706 command_runner.go:130] >   "images": [
	I1206 19:13:38.798435   86706 command_runner.go:130] >     {
	I1206 19:13:38.798448   86706 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1206 19:13:38.798455   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.798465   86706 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1206 19:13:38.798471   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798477   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.798491   86706 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1206 19:13:38.798502   86706 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1206 19:13:38.798521   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798533   86706 command_runner.go:130] >       "size": "65258016",
	I1206 19:13:38.798540   86706 command_runner.go:130] >       "uid": null,
	I1206 19:13:38.798549   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.798557   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.798567   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.798573   86706 command_runner.go:130] >     },
	I1206 19:13:38.798580   86706 command_runner.go:130] >     {
	I1206 19:13:38.798589   86706 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1206 19:13:38.798599   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.798608   86706 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 19:13:38.798616   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798623   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.798639   86706 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1206 19:13:38.798655   86706 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1206 19:13:38.798663   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798678   86706 command_runner.go:130] >       "size": "31470524",
	I1206 19:13:38.798685   86706 command_runner.go:130] >       "uid": null,
	I1206 19:13:38.798693   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.798700   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.798705   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.798709   86706 command_runner.go:130] >     },
	I1206 19:13:38.798713   86706 command_runner.go:130] >     {
	I1206 19:13:38.798718   86706 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1206 19:13:38.798725   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.798731   86706 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1206 19:13:38.798737   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798741   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.798751   86706 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1206 19:13:38.798758   86706 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1206 19:13:38.798765   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798769   86706 command_runner.go:130] >       "size": "53621675",
	I1206 19:13:38.798776   86706 command_runner.go:130] >       "uid": null,
	I1206 19:13:38.798779   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.798783   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.798787   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.798793   86706 command_runner.go:130] >     },
	I1206 19:13:38.798797   86706 command_runner.go:130] >     {
	I1206 19:13:38.798803   86706 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1206 19:13:38.798810   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.798815   86706 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1206 19:13:38.798819   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798825   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.798834   86706 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1206 19:13:38.798843   86706 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1206 19:13:38.798852   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798859   86706 command_runner.go:130] >       "size": "295456551",
	I1206 19:13:38.798863   86706 command_runner.go:130] >       "uid": {
	I1206 19:13:38.798870   86706 command_runner.go:130] >         "value": "0"
	I1206 19:13:38.798873   86706 command_runner.go:130] >       },
	I1206 19:13:38.798877   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.798881   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.798888   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.798891   86706 command_runner.go:130] >     },
	I1206 19:13:38.798897   86706 command_runner.go:130] >     {
	I1206 19:13:38.798906   86706 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1206 19:13:38.798910   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.798916   86706 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1206 19:13:38.798922   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798926   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.798936   86706 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1206 19:13:38.798944   86706 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1206 19:13:38.798950   86706 command_runner.go:130] >       ],
	I1206 19:13:38.798954   86706 command_runner.go:130] >       "size": "127226832",
	I1206 19:13:38.798960   86706 command_runner.go:130] >       "uid": {
	I1206 19:13:38.798964   86706 command_runner.go:130] >         "value": "0"
	I1206 19:13:38.798971   86706 command_runner.go:130] >       },
	I1206 19:13:38.798975   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.798981   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.798985   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.798989   86706 command_runner.go:130] >     },
	I1206 19:13:38.798993   86706 command_runner.go:130] >     {
	I1206 19:13:38.799000   86706 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1206 19:13:38.799007   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.799013   86706 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1206 19:13:38.799017   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799021   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.799031   86706 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1206 19:13:38.799039   86706 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1206 19:13:38.799044   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799049   86706 command_runner.go:130] >       "size": "123261750",
	I1206 19:13:38.799055   86706 command_runner.go:130] >       "uid": {
	I1206 19:13:38.799059   86706 command_runner.go:130] >         "value": "0"
	I1206 19:13:38.799063   86706 command_runner.go:130] >       },
	I1206 19:13:38.799067   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.799074   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.799078   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.799082   86706 command_runner.go:130] >     },
	I1206 19:13:38.799085   86706 command_runner.go:130] >     {
	I1206 19:13:38.799091   86706 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1206 19:13:38.799100   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.799105   86706 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1206 19:13:38.799111   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799115   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.799122   86706 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1206 19:13:38.799132   86706 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1206 19:13:38.799135   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799139   86706 command_runner.go:130] >       "size": "74749335",
	I1206 19:13:38.799143   86706 command_runner.go:130] >       "uid": null,
	I1206 19:13:38.799151   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.799155   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.799159   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.799165   86706 command_runner.go:130] >     },
	I1206 19:13:38.799168   86706 command_runner.go:130] >     {
	I1206 19:13:38.799174   86706 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1206 19:13:38.799181   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.799186   86706 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1206 19:13:38.799191   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799197   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.799253   86706 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1206 19:13:38.799269   86706 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1206 19:13:38.799275   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799282   86706 command_runner.go:130] >       "size": "61551410",
	I1206 19:13:38.799290   86706 command_runner.go:130] >       "uid": {
	I1206 19:13:38.799296   86706 command_runner.go:130] >         "value": "0"
	I1206 19:13:38.799306   86706 command_runner.go:130] >       },
	I1206 19:13:38.799312   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.799322   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.799328   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.799341   86706 command_runner.go:130] >     },
	I1206 19:13:38.799347   86706 command_runner.go:130] >     {
	I1206 19:13:38.799358   86706 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1206 19:13:38.799368   86706 command_runner.go:130] >       "repoTags": [
	I1206 19:13:38.799375   86706 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1206 19:13:38.799384   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799390   86706 command_runner.go:130] >       "repoDigests": [
	I1206 19:13:38.799410   86706 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1206 19:13:38.799425   86706 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1206 19:13:38.799434   86706 command_runner.go:130] >       ],
	I1206 19:13:38.799440   86706 command_runner.go:130] >       "size": "750414",
	I1206 19:13:38.799450   86706 command_runner.go:130] >       "uid": {
	I1206 19:13:38.799456   86706 command_runner.go:130] >         "value": "65535"
	I1206 19:13:38.799465   86706 command_runner.go:130] >       },
	I1206 19:13:38.799471   86706 command_runner.go:130] >       "username": "",
	I1206 19:13:38.799477   86706 command_runner.go:130] >       "spec": null,
	I1206 19:13:38.799487   86706 command_runner.go:130] >       "pinned": false
	I1206 19:13:38.799493   86706 command_runner.go:130] >     }
	I1206 19:13:38.799501   86706 command_runner.go:130] >   ]
	I1206 19:13:38.799506   86706 command_runner.go:130] > }
	I1206 19:13:38.800233   86706 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:13:38.800256   86706 cache_images.go:84] Images are preloaded, skipping loading
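The crio.go:492/496 decisions above (before and after extracting the preload tarball) come from parsing `sudo crictl images --output json` and checking whether the kube-apiserver tag for the target Kubernetes version is present. A small sketch of that check against the JSON shape shown in this log; the struct and helper names are illustrative and cover only the fields visible above:

// Sketch: detect whether a required repoTag appears in `crictl images --output json`.
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasTag reports whether any listed image carries the wanted repo tag.
func hasTag(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"7fe0...","repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"]}]}`)
	ok, err := hasTag(raw, "registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}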
	I1206 19:13:38.800341   86706 ssh_runner.go:195] Run: crio config
	I1206 19:13:38.855740   86706 command_runner.go:130] ! time="2023-12-06 19:13:38.803411580Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1206 19:13:38.855771   86706 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 19:13:38.865469   86706 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 19:13:38.865495   86706 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 19:13:38.865506   86706 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 19:13:38.865511   86706 command_runner.go:130] > #
	I1206 19:13:38.865522   86706 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 19:13:38.865531   86706 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 19:13:38.865540   86706 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 19:13:38.865551   86706 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 19:13:38.865557   86706 command_runner.go:130] > # reload'.
	I1206 19:13:38.865567   86706 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 19:13:38.865583   86706 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 19:13:38.865592   86706 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 19:13:38.865599   86706 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 19:13:38.865603   86706 command_runner.go:130] > [crio]
	I1206 19:13:38.865612   86706 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 19:13:38.865618   86706 command_runner.go:130] > # containers images, in this directory.
	I1206 19:13:38.865627   86706 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1206 19:13:38.865637   86706 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 19:13:38.865642   86706 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1206 19:13:38.865655   86706 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 19:13:38.865664   86706 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 19:13:38.865670   86706 command_runner.go:130] > storage_driver = "overlay"
	I1206 19:13:38.865678   86706 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 19:13:38.865688   86706 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 19:13:38.865698   86706 command_runner.go:130] > storage_option = [
	I1206 19:13:38.865706   86706 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1206 19:13:38.865715   86706 command_runner.go:130] > ]
	I1206 19:13:38.865724   86706 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 19:13:38.865735   86706 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 19:13:38.865748   86706 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 19:13:38.865759   86706 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 19:13:38.865770   86706 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 19:13:38.865780   86706 command_runner.go:130] > # always happen on a node reboot
	I1206 19:13:38.865788   86706 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 19:13:38.865814   86706 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 19:13:38.865826   86706 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 19:13:38.865845   86706 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 19:13:38.865858   86706 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1206 19:13:38.865873   86706 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 19:13:38.865888   86706 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 19:13:38.865898   86706 command_runner.go:130] > # internal_wipe = true
	I1206 19:13:38.865910   86706 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 19:13:38.865922   86706 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 19:13:38.865934   86706 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 19:13:38.865945   86706 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 19:13:38.865955   86706 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 19:13:38.865961   86706 command_runner.go:130] > [crio.api]
	I1206 19:13:38.865969   86706 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 19:13:38.865976   86706 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 19:13:38.865982   86706 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 19:13:38.865989   86706 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 19:13:38.865996   86706 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 19:13:38.866006   86706 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 19:13:38.866012   86706 command_runner.go:130] > # stream_port = "0"
	I1206 19:13:38.866018   86706 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 19:13:38.866023   86706 command_runner.go:130] > # stream_enable_tls = false
	I1206 19:13:38.866031   86706 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 19:13:38.866036   86706 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 19:13:38.866045   86706 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 19:13:38.866052   86706 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1206 19:13:38.866058   86706 command_runner.go:130] > # minutes.
	I1206 19:13:38.866062   86706 command_runner.go:130] > # stream_tls_cert = ""
	I1206 19:13:38.866071   86706 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 19:13:38.866079   86706 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1206 19:13:38.866084   86706 command_runner.go:130] > # stream_tls_key = ""
	I1206 19:13:38.866090   86706 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 19:13:38.866098   86706 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 19:13:38.866105   86706 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1206 19:13:38.866110   86706 command_runner.go:130] > # stream_tls_ca = ""
	I1206 19:13:38.866117   86706 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:13:38.866130   86706 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1206 19:13:38.866140   86706 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:13:38.866147   86706 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1206 19:13:38.866172   86706 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 19:13:38.866186   86706 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 19:13:38.866189   86706 command_runner.go:130] > [crio.runtime]
	I1206 19:13:38.866195   86706 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 19:13:38.866202   86706 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 19:13:38.866208   86706 command_runner.go:130] > # "nofile=1024:2048"
	I1206 19:13:38.866215   86706 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 19:13:38.866223   86706 command_runner.go:130] > # default_ulimits = [
	I1206 19:13:38.866227   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866235   86706 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 19:13:38.866241   86706 command_runner.go:130] > # no_pivot = false
	I1206 19:13:38.866247   86706 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 19:13:38.866255   86706 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 19:13:38.866262   86706 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 19:13:38.866268   86706 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 19:13:38.866278   86706 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 19:13:38.866287   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:13:38.866294   86706 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1206 19:13:38.866298   86706 command_runner.go:130] > # Cgroup setting for conmon
	I1206 19:13:38.866311   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 19:13:38.866321   86706 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 19:13:38.866334   86706 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 19:13:38.866345   86706 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 19:13:38.866359   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:13:38.866369   86706 command_runner.go:130] > conmon_env = [
	I1206 19:13:38.866381   86706 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1206 19:13:38.866389   86706 command_runner.go:130] > ]
	I1206 19:13:38.866398   86706 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 19:13:38.866407   86706 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 19:13:38.866414   86706 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 19:13:38.866421   86706 command_runner.go:130] > # default_env = [
	I1206 19:13:38.866425   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866433   86706 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 19:13:38.866443   86706 command_runner.go:130] > # selinux = false
	I1206 19:13:38.866453   86706 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 19:13:38.866461   86706 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1206 19:13:38.866473   86706 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1206 19:13:38.866479   86706 command_runner.go:130] > # seccomp_profile = ""
	I1206 19:13:38.866485   86706 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1206 19:13:38.866493   86706 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1206 19:13:38.866499   86706 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1206 19:13:38.866506   86706 command_runner.go:130] > # which might increase security.
	I1206 19:13:38.866511   86706 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1206 19:13:38.866518   86706 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 19:13:38.866524   86706 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 19:13:38.866532   86706 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 19:13:38.866538   86706 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 19:13:38.866545   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:13:38.866550   86706 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 19:13:38.866558   86706 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 19:13:38.866562   86706 command_runner.go:130] > # the cgroup blockio controller.
	I1206 19:13:38.866570   86706 command_runner.go:130] > # blockio_config_file = ""
	I1206 19:13:38.866577   86706 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 19:13:38.866583   86706 command_runner.go:130] > # irqbalance daemon.
	I1206 19:13:38.866588   86706 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 19:13:38.866597   86706 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 19:13:38.866602   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:13:38.866607   86706 command_runner.go:130] > # rdt_config_file = ""
	I1206 19:13:38.866612   86706 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 19:13:38.866619   86706 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 19:13:38.866625   86706 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 19:13:38.866631   86706 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 19:13:38.866640   86706 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 19:13:38.866648   86706 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 19:13:38.866657   86706 command_runner.go:130] > # will be added.
	I1206 19:13:38.866661   86706 command_runner.go:130] > # default_capabilities = [
	I1206 19:13:38.866665   86706 command_runner.go:130] > # 	"CHOWN",
	I1206 19:13:38.866668   86706 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 19:13:38.866675   86706 command_runner.go:130] > # 	"FSETID",
	I1206 19:13:38.866681   86706 command_runner.go:130] > # 	"FOWNER",
	I1206 19:13:38.866692   86706 command_runner.go:130] > # 	"SETGID",
	I1206 19:13:38.866695   86706 command_runner.go:130] > # 	"SETUID",
	I1206 19:13:38.866699   86706 command_runner.go:130] > # 	"SETPCAP",
	I1206 19:13:38.866703   86706 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 19:13:38.866706   86706 command_runner.go:130] > # 	"KILL",
	I1206 19:13:38.866709   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866715   86706 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 19:13:38.866721   86706 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:13:38.866725   86706 command_runner.go:130] > # default_sysctls = [
	I1206 19:13:38.866728   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866733   86706 command_runner.go:130] > # List of devices on the host that a
	I1206 19:13:38.866739   86706 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 19:13:38.866746   86706 command_runner.go:130] > # allowed_devices = [
	I1206 19:13:38.866750   86706 command_runner.go:130] > # 	"/dev/fuse",
	I1206 19:13:38.866754   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866759   86706 command_runner.go:130] > # List of additional devices, specified as
	I1206 19:13:38.866768   86706 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 19:13:38.866776   86706 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 19:13:38.866813   86706 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:13:38.866820   86706 command_runner.go:130] > # additional_devices = [
	I1206 19:13:38.866823   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866828   86706 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 19:13:38.866834   86706 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 19:13:38.866838   86706 command_runner.go:130] > # 	"/etc/cdi",
	I1206 19:13:38.866844   86706 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 19:13:38.866848   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866856   86706 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 19:13:38.866864   86706 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 19:13:38.866870   86706 command_runner.go:130] > # Defaults to false.
	I1206 19:13:38.866875   86706 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 19:13:38.866884   86706 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 19:13:38.866892   86706 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 19:13:38.866898   86706 command_runner.go:130] > # hooks_dir = [
	I1206 19:13:38.866903   86706 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 19:13:38.866909   86706 command_runner.go:130] > # ]
	I1206 19:13:38.866918   86706 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 19:13:38.866927   86706 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 19:13:38.866932   86706 command_runner.go:130] > # its default mounts from the following two files:
	I1206 19:13:38.866938   86706 command_runner.go:130] > #
	I1206 19:13:38.866944   86706 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 19:13:38.866953   86706 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 19:13:38.866960   86706 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 19:13:38.866966   86706 command_runner.go:130] > #
	I1206 19:13:38.866973   86706 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 19:13:38.866981   86706 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 19:13:38.866989   86706 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 19:13:38.866996   86706 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 19:13:38.866999   86706 command_runner.go:130] > #
	I1206 19:13:38.867006   86706 command_runner.go:130] > # default_mounts_file = ""
	I1206 19:13:38.867011   86706 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 19:13:38.867020   86706 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 19:13:38.867024   86706 command_runner.go:130] > pids_limit = 1024
	I1206 19:13:38.867033   86706 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 19:13:38.867044   86706 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 19:13:38.867052   86706 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 19:13:38.867062   86706 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 19:13:38.867068   86706 command_runner.go:130] > # log_size_max = -1
	I1206 19:13:38.867076   86706 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 19:13:38.867082   86706 command_runner.go:130] > # log_to_journald = false
	I1206 19:13:38.867088   86706 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 19:13:38.867095   86706 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 19:13:38.867100   86706 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 19:13:38.867108   86706 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 19:13:38.867113   86706 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 19:13:38.867123   86706 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 19:13:38.867131   86706 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 19:13:38.867137   86706 command_runner.go:130] > # read_only = false
	I1206 19:13:38.867144   86706 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 19:13:38.867152   86706 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 19:13:38.867158   86706 command_runner.go:130] > # live configuration reload.
	I1206 19:13:38.867163   86706 command_runner.go:130] > # log_level = "info"
	I1206 19:13:38.867173   86706 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 19:13:38.867181   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:13:38.867187   86706 command_runner.go:130] > # log_filter = ""
	I1206 19:13:38.867193   86706 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 19:13:38.867201   86706 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 19:13:38.867205   86706 command_runner.go:130] > # separated by comma.
	I1206 19:13:38.867212   86706 command_runner.go:130] > # uid_mappings = ""
	I1206 19:13:38.867218   86706 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 19:13:38.867226   86706 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 19:13:38.867232   86706 command_runner.go:130] > # separated by comma.
	I1206 19:13:38.867236   86706 command_runner.go:130] > # gid_mappings = ""
	I1206 19:13:38.867244   86706 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 19:13:38.867252   86706 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:13:38.867260   86706 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:13:38.867265   86706 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 19:13:38.867271   86706 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 19:13:38.867280   86706 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:13:38.867289   86706 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:13:38.867297   86706 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 19:13:38.867310   86706 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 19:13:38.867322   86706 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 19:13:38.867335   86706 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 19:13:38.867345   86706 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 19:13:38.867357   86706 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 19:13:38.867369   86706 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 19:13:38.867380   86706 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 19:13:38.867391   86706 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 19:13:38.867404   86706 command_runner.go:130] > drop_infra_ctr = false
	I1206 19:13:38.867417   86706 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 19:13:38.867429   86706 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 19:13:38.867442   86706 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 19:13:38.867449   86706 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 19:13:38.867455   86706 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 19:13:38.867462   86706 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 19:13:38.867469   86706 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 19:13:38.867477   86706 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 19:13:38.867487   86706 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1206 19:13:38.867496   86706 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 19:13:38.867502   86706 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1206 19:13:38.867509   86706 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1206 19:13:38.867519   86706 command_runner.go:130] > # default_runtime = "runc"
	I1206 19:13:38.867527   86706 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 19:13:38.867538   86706 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1206 19:13:38.867550   86706 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 19:13:38.867559   86706 command_runner.go:130] > # creation as a file is not desired either.
	I1206 19:13:38.867573   86706 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 19:13:38.867586   86706 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 19:13:38.867595   86706 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 19:13:38.867604   86706 command_runner.go:130] > # ]
	I1206 19:13:38.867616   86706 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 19:13:38.867630   86706 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 19:13:38.867645   86706 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1206 19:13:38.867659   86706 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1206 19:13:38.867668   86706 command_runner.go:130] > #
	I1206 19:13:38.867679   86706 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1206 19:13:38.867692   86706 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1206 19:13:38.867703   86706 command_runner.go:130] > #  runtime_type = "oci"
	I1206 19:13:38.867715   86706 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1206 19:13:38.867725   86706 command_runner.go:130] > #  privileged_without_host_devices = false
	I1206 19:13:38.867736   86706 command_runner.go:130] > #  allowed_annotations = []
	I1206 19:13:38.867746   86706 command_runner.go:130] > # Where:
	I1206 19:13:38.867758   86706 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1206 19:13:38.867769   86706 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1206 19:13:38.867784   86706 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 19:13:38.867798   86706 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 19:13:38.867813   86706 command_runner.go:130] > #   in $PATH.
	I1206 19:13:38.867826   86706 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1206 19:13:38.867838   86706 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 19:13:38.867853   86706 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1206 19:13:38.867861   86706 command_runner.go:130] > #   state.
	I1206 19:13:38.867873   86706 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 19:13:38.867887   86706 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 19:13:38.867905   86706 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 19:13:38.867919   86706 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 19:13:38.867933   86706 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 19:13:38.867948   86706 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 19:13:38.867960   86706 command_runner.go:130] > #   The currently recognized values are:
	I1206 19:13:38.867974   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 19:13:38.867990   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 19:13:38.868003   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 19:13:38.868017   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 19:13:38.868033   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 19:13:38.868047   86706 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 19:13:38.868061   86706 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 19:13:38.868077   86706 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1206 19:13:38.868089   86706 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 19:13:38.868100   86706 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 19:13:38.868108   86706 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1206 19:13:38.868119   86706 command_runner.go:130] > runtime_type = "oci"
	I1206 19:13:38.868130   86706 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 19:13:38.868144   86706 command_runner.go:130] > runtime_config_path = ""
	I1206 19:13:38.868155   86706 command_runner.go:130] > monitor_path = ""
	I1206 19:13:38.868163   86706 command_runner.go:130] > monitor_cgroup = ""
	I1206 19:13:38.868174   86706 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 19:13:38.868187   86706 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1206 19:13:38.868196   86706 command_runner.go:130] > # running containers
	I1206 19:13:38.868207   86706 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1206 19:13:38.868219   86706 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1206 19:13:38.868280   86706 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1206 19:13:38.868293   86706 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1206 19:13:38.868301   86706 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1206 19:13:38.868309   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1206 19:13:38.868316   86706 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1206 19:13:38.868325   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1206 19:13:38.868336   86706 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1206 19:13:38.868349   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1206 19:13:38.868364   86706 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 19:13:38.868376   86706 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 19:13:38.868395   86706 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 19:13:38.868411   86706 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 19:13:38.868427   86706 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1206 19:13:38.868441   86706 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 19:13:38.868460   86706 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 19:13:38.868477   86706 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 19:13:38.868490   86706 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 19:13:38.868506   86706 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 19:13:38.868515   86706 command_runner.go:130] > # Example:
	I1206 19:13:38.868524   86706 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 19:13:38.868536   86706 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 19:13:38.868548   86706 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 19:13:38.868564   86706 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 19:13:38.868574   86706 command_runner.go:130] > # cpuset = 0
	I1206 19:13:38.868585   86706 command_runner.go:130] > # cpushares = "0-1"
	I1206 19:13:38.868593   86706 command_runner.go:130] > # Where:
	I1206 19:13:38.868604   86706 command_runner.go:130] > # The workload name is workload-type.
	I1206 19:13:38.868617   86706 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 19:13:38.868633   86706 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 19:13:38.868647   86706 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 19:13:38.868664   86706 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 19:13:38.868677   86706 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 19:13:38.868686   86706 command_runner.go:130] > # 
	I1206 19:13:38.868697   86706 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 19:13:38.868706   86706 command_runner.go:130] > #
	I1206 19:13:38.868717   86706 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 19:13:38.868731   86706 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1206 19:13:38.868745   86706 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1206 19:13:38.868764   86706 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1206 19:13:38.868777   86706 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1206 19:13:38.868785   86706 command_runner.go:130] > [crio.image]
	I1206 19:13:38.868798   86706 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 19:13:38.868814   86706 command_runner.go:130] > # default_transport = "docker://"
	I1206 19:13:38.868829   86706 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 19:13:38.868844   86706 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:13:38.868855   86706 command_runner.go:130] > # global_auth_file = ""
	I1206 19:13:38.868871   86706 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 19:13:38.868884   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:13:38.868896   86706 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1206 19:13:38.868912   86706 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 19:13:38.868926   86706 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:13:38.868937   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:13:38.868946   86706 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 19:13:38.868960   86706 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 19:13:38.868974   86706 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 19:13:38.868985   86706 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 19:13:38.868993   86706 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 19:13:38.869000   86706 command_runner.go:130] > # pause_command = "/pause"
	I1206 19:13:38.869009   86706 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 19:13:38.869022   86706 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 19:13:38.869032   86706 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 19:13:38.869044   86706 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 19:13:38.869053   86706 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 19:13:38.869060   86706 command_runner.go:130] > # signature_policy = ""
	I1206 19:13:38.869074   86706 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 19:13:38.869089   86706 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 19:13:38.869099   86706 command_runner.go:130] > # changing them here.
	I1206 19:13:38.869107   86706 command_runner.go:130] > # insecure_registries = [
	I1206 19:13:38.869116   86706 command_runner.go:130] > # ]
	I1206 19:13:38.869132   86706 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 19:13:38.869145   86706 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 19:13:38.869156   86706 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 19:13:38.869167   86706 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 19:13:38.869178   86706 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 19:13:38.869192   86706 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 19:13:38.869200   86706 command_runner.go:130] > # CNI plugins.
	I1206 19:13:38.869210   86706 command_runner.go:130] > [crio.network]
	I1206 19:13:38.869226   86706 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 19:13:38.869248   86706 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1206 19:13:38.869260   86706 command_runner.go:130] > # cni_default_network = ""
	I1206 19:13:38.869271   86706 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 19:13:38.869283   86706 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 19:13:38.869302   86706 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 19:13:38.869313   86706 command_runner.go:130] > # plugin_dirs = [
	I1206 19:13:38.869323   86706 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 19:13:38.869329   86706 command_runner.go:130] > # ]
	I1206 19:13:38.869344   86706 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 19:13:38.869354   86706 command_runner.go:130] > [crio.metrics]
	I1206 19:13:38.869366   86706 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 19:13:38.869377   86706 command_runner.go:130] > enable_metrics = true
	I1206 19:13:38.869387   86706 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 19:13:38.869399   86706 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 19:13:38.869413   86706 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 19:13:38.869426   86706 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 19:13:38.869440   86706 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 19:13:38.869451   86706 command_runner.go:130] > # metrics_collectors = [
	I1206 19:13:38.869460   86706 command_runner.go:130] > # 	"operations",
	I1206 19:13:38.869470   86706 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1206 19:13:38.869481   86706 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1206 19:13:38.869492   86706 command_runner.go:130] > # 	"operations_errors",
	I1206 19:13:38.869506   86706 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1206 19:13:38.869518   86706 command_runner.go:130] > # 	"image_pulls_by_name",
	I1206 19:13:38.869530   86706 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1206 19:13:38.869541   86706 command_runner.go:130] > # 	"image_pulls_failures",
	I1206 19:13:38.869550   86706 command_runner.go:130] > # 	"image_pulls_successes",
	I1206 19:13:38.869563   86706 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 19:13:38.869572   86706 command_runner.go:130] > # 	"image_layer_reuse",
	I1206 19:13:38.869580   86706 command_runner.go:130] > # 	"containers_oom_total",
	I1206 19:13:38.869591   86706 command_runner.go:130] > # 	"containers_oom",
	I1206 19:13:38.869602   86706 command_runner.go:130] > # 	"processes_defunct",
	I1206 19:13:38.869616   86706 command_runner.go:130] > # 	"operations_total",
	I1206 19:13:38.869638   86706 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 19:13:38.869651   86706 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 19:13:38.869661   86706 command_runner.go:130] > # 	"operations_errors_total",
	I1206 19:13:38.869669   86706 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 19:13:38.869681   86706 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 19:13:38.869692   86706 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 19:13:38.869703   86706 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 19:13:38.869718   86706 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 19:13:38.869729   86706 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 19:13:38.869736   86706 command_runner.go:130] > # ]
	I1206 19:13:38.869748   86706 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 19:13:38.869756   86706 command_runner.go:130] > # metrics_port = 9090
	I1206 19:13:38.869769   86706 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 19:13:38.869780   86706 command_runner.go:130] > # metrics_socket = ""
	I1206 19:13:38.869792   86706 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 19:13:38.869811   86706 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 19:13:38.869825   86706 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 19:13:38.869837   86706 command_runner.go:130] > # certificate on any modification event.
	I1206 19:13:38.869845   86706 command_runner.go:130] > # metrics_cert = ""
	I1206 19:13:38.869858   86706 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 19:13:38.869872   86706 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 19:13:38.869882   86706 command_runner.go:130] > # metrics_key = ""
	I1206 19:13:38.869896   86706 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 19:13:38.869906   86706 command_runner.go:130] > [crio.tracing]
	I1206 19:13:38.869919   86706 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 19:13:38.869933   86706 command_runner.go:130] > # enable_tracing = false
	I1206 19:13:38.869946   86706 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1206 19:13:38.869958   86706 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1206 19:13:38.869971   86706 command_runner.go:130] > # Number of samples to collect per million spans.
	I1206 19:13:38.869982   86706 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 19:13:38.869996   86706 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 19:13:38.870006   86706 command_runner.go:130] > [crio.stats]
	I1206 19:13:38.870020   86706 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 19:13:38.870033   86706 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 19:13:38.870044   86706 command_runner.go:130] > # stats_collection_period = 0
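
	The dump above is CRI-O's effective TOML configuration as echoed by minikube. A minimal Go sketch for reading back a few of the fields minikube overrides (pause_image, pinns_path, the [crio.metrics] settings) follows; the /etc/crio/crio.conf path and the github.com/BurntSushi/toml parser are assumptions for illustration, neither is named in this log.

	// crioconf.go — sketch: read back a handful of CRI-O settings from its TOML
	// config. The path /etc/crio/crio.conf and the toml library are assumptions.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				DefaultRuntime string `toml:"default_runtime"`
				PinnsPath      string `toml:"pinns_path"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
			Metrics struct {
				EnableMetrics bool `toml:"enable_metrics"`
				MetricsPort   int  `toml:"metrics_port"`
			} `toml:"metrics"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
		fmt.Println("pinns_path:", cfg.Crio.Runtime.PinnsPath)
		fmt.Println("metrics enabled:", cfg.Crio.Metrics.EnableMetrics, "port:", cfg.Crio.Metrics.MetricsPort)
	}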
	I1206 19:13:38.870138   86706 cni.go:84] Creating CNI manager for ""
	I1206 19:13:38.870152   86706 cni.go:136] 3 nodes found, recommending kindnet
	I1206 19:13:38.870177   86706 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:13:38.870207   86706 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-593099 NodeName:multinode-593099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:13:38.870422   86706 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-593099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:13:38.870517   86706 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-593099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
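
	The kubeadm config rendered above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube later copies to /var/tmp/minikube/kubeadm.yaml. A small sketch for enumerating the documents and their kinds with gopkg.in/yaml.v3; this is a generic inspection helper, not how minikube itself renders or validates the file.

	// kubeadmdocs.go — sketch: list kind/apiVersion of each document in the
	// generated kubeadm.yaml. gopkg.in/yaml.v3 is an assumption made here.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}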
	I1206 19:13:38.870589   86706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:13:38.880654   86706 command_runner.go:130] > kubeadm
	I1206 19:13:38.880674   86706 command_runner.go:130] > kubectl
	I1206 19:13:38.880680   86706 command_runner.go:130] > kubelet
	I1206 19:13:38.880754   86706 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:13:38.880874   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:13:38.890229   86706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1206 19:13:38.906296   86706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:13:38.921877   86706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1206 19:13:38.938075   86706 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I1206 19:13:38.941665   86706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
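
	The two commands above first check /etc/hosts for a control-plane.minikube.internal entry and then rewrite the file, dropping any stale line and appending the current IP. A rough Go equivalent of that "remove old line, append fresh one" pattern; minikube does this over SSH with the shell one-liner shown, so the in-process version here is purely illustrative.

	// hostsentry.go — sketch of the ensure-hosts-entry pattern: drop any stale
	// "control-plane.minikube.internal" line, then append the current mapping.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.39.125\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			log.Fatal(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			// mirrors `grep -v $'\tcontrol-plane.minikube.internal$'`
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "control-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}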
	I1206 19:13:38.956198   86706 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099 for IP: 192.168.39.125
	I1206 19:13:38.956251   86706 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:13:38.956492   86706 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:13:38.956559   86706 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:13:38.956697   86706 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key
	I1206 19:13:38.956790   86706 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key.657bd91f
	I1206 19:13:38.956868   86706 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key
	I1206 19:13:38.956885   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 19:13:38.956907   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 19:13:38.956926   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 19:13:38.956948   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 19:13:38.956966   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 19:13:38.956984   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 19:13:38.957001   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 19:13:38.957022   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 19:13:38.957111   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:13:38.957171   86706 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:13:38.957192   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:13:38.957249   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:13:38.957293   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:13:38.957333   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:13:38.957397   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:13:38.957436   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem -> /usr/share/ca-certificates/70834.pem
	I1206 19:13:38.957457   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /usr/share/ca-certificates/708342.pem
	I1206 19:13:38.957474   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:13:38.958395   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:13:38.980677   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:13:39.002391   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:13:39.024038   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:13:39.047740   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:13:39.071898   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:13:39.096818   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:13:39.121507   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:13:39.145818   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:13:39.169200   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:13:39.192669   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:13:39.215622   86706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:13:39.232174   86706 ssh_runner.go:195] Run: openssl version
	I1206 19:13:39.237426   86706 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1206 19:13:39.237711   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:13:39.247367   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:13:39.251877   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:13:39.252107   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:13:39.252163   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:13:39.257928   86706 command_runner.go:130] > b5213941
	I1206 19:13:39.258001   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:13:39.268132   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:13:39.277696   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:13:39.282232   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:13:39.282399   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:13:39.282469   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:13:39.287830   86706 command_runner.go:130] > 51391683
	I1206 19:13:39.287916   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:13:39.297121   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:13:39.306880   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:13:39.311372   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:13:39.311454   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:13:39.311542   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:13:39.317787   86706 command_runner.go:130] > 3ec20f2e
	I1206 19:13:39.317853   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
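
	Each certificate copied to /usr/share/ca-certificates is made visible to OpenSSL by linking it as /etc/ssl/certs/<subject-hash>.0, where the hash (b5213941, 51391683, 3ec20f2e above) comes from `openssl x509 -hash -noout`. A short sketch of that install step, shelling out to openssl exactly as the log does; the file list and error handling are illustrative.

	// certlink.go — sketch: ask openssl for a certificate's subject hash, then
	// (re)create /etc/ssl/certs/<hash>.0 pointing at the PEM, emulating `ln -fs`.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any existing link, as `ln -fs` would
		return os.Symlink(pemPath, link)
	}

	func main() {
		for _, p := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/70834.pem",
			"/usr/share/ca-certificates/708342.pem",
		} {
			if err := linkCert(p); err != nil {
				log.Fatal(err)
			}
			fmt.Println("linked", p)
		}
	}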
	I1206 19:13:39.327516   86706 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:13:39.332186   86706 command_runner.go:130] > ca.crt
	I1206 19:13:39.332206   86706 command_runner.go:130] > ca.key
	I1206 19:13:39.332211   86706 command_runner.go:130] > healthcheck-client.crt
	I1206 19:13:39.332215   86706 command_runner.go:130] > healthcheck-client.key
	I1206 19:13:39.332220   86706 command_runner.go:130] > peer.crt
	I1206 19:13:39.332225   86706 command_runner.go:130] > peer.key
	I1206 19:13:39.332231   86706 command_runner.go:130] > server.crt
	I1206 19:13:39.332237   86706 command_runner.go:130] > server.key
	I1206 19:13:39.332304   86706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:13:39.338389   86706 command_runner.go:130] > Certificate will not expire
	I1206 19:13:39.338700   86706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:13:39.344652   86706 command_runner.go:130] > Certificate will not expire
	I1206 19:13:39.344705   86706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:13:39.350406   86706 command_runner.go:130] > Certificate will not expire
	I1206 19:13:39.350825   86706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:13:39.356442   86706 command_runner.go:130] > Certificate will not expire
	I1206 19:13:39.356859   86706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:13:39.362738   86706 command_runner.go:130] > Certificate will not expire
	I1206 19:13:39.362804   86706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 19:13:39.368170   86706 command_runner.go:130] > Certificate will not expire
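
	The repeated `openssl x509 -noout -checkend 86400` calls simply ask whether each certificate will still be valid 24 hours from now. The same check in Go with crypto/x509, as a sketch; minikube shells out to openssl as shown rather than doing this in-process.

	// certcheck.go — sketch: crypto/x509 equivalent of
	// `openssl x509 -noout -checkend 86400` (does the cert expire within 24h?).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}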
	I1206 19:13:39.368399   86706 kubeadm.go:404] StartCluster: {Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:13:39.368579   86706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:13:39.368663   86706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:13:39.408438   86706 cri.go:89] found id: ""
	I1206 19:13:39.408537   86706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:13:39.418194   86706 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 19:13:39.418215   86706 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 19:13:39.418224   86706 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 19:13:39.418228   86706 command_runner.go:130] > member
	I1206 19:13:39.418294   86706 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:13:39.418343   86706 kubeadm.go:636] restartCluster start
	I1206 19:13:39.418402   86706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:13:39.426928   86706 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:39.427493   86706 kubeconfig.go:92] found "multinode-593099" server: "https://192.168.39.125:8443"
	I1206 19:13:39.427924   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:13:39.428172   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:13:39.428827   86706 cert_rotation.go:137] Starting client certificate rotation controller
	I1206 19:13:39.429132   86706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:13:39.437353   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:39.437405   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:39.447986   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:39.448002   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:39.448035   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:39.457936   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:39.958983   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:39.959095   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:39.971065   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:40.458686   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:40.458768   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:40.470104   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:40.958389   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:40.958489   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:40.970052   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:41.458594   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:41.458683   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:41.470175   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:41.958127   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:41.958230   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:41.969252   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:42.458386   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:42.458492   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:42.469568   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:42.958042   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:42.958133   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:42.969217   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:43.458795   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:43.458913   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:43.470245   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:43.958920   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:43.959039   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:43.970399   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:44.458045   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:44.458131   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:44.469573   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:44.958643   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:44.958738   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:44.970805   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:45.458143   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:45.458221   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:45.469648   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:45.958164   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:45.958256   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:45.970736   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:46.458276   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:46.458352   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:46.469820   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:46.958525   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:46.958602   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:46.970931   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:47.458487   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:47.458596   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:47.469607   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:47.958163   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:47.958258   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:47.970206   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:48.458833   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:48.458963   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:48.470243   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:48.958901   86706 api_server.go:166] Checking apiserver status ...
	I1206 19:13:48.959009   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:13:48.970725   86706 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:13:49.437448   86706 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
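
	The block above polls for a running kube-apiserver process via pgrep roughly every 500ms and gives up after about 10 seconds, at which point restartCluster falls back to a full reconfigure. A sketch of that poll-until-deadline pattern; the interval and budget are read off the timestamps, not constants taken from minikube's source.

	// apiserverwait.go — sketch: probe for a kube-apiserver process with pgrep
	// until it appears or the context's deadline expires.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(ctx context.Context, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // a matching process exists
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver never came up: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForAPIServer(ctx, 500*time.Millisecond); err != nil {
			fmt.Println(err) // e.g. "context deadline exceeded", as seen in the log
		}
	}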
	I1206 19:13:49.437481   86706 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:13:49.437493   86706 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:13:49.437550   86706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:13:49.474234   86706 cri.go:89] found id: ""
	I1206 19:13:49.474311   86706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:13:49.489056   86706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:13:49.497608   86706 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1206 19:13:49.497660   86706 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1206 19:13:49.497816   86706 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1206 19:13:49.497854   86706 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:13:49.497920   86706 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:13:49.497982   86706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:13:49.506332   86706 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:13:49.506383   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:13:49.615731   86706 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 19:13:49.615762   86706 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1206 19:13:49.615772   86706 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1206 19:13:49.615783   86706 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 19:13:49.615793   86706 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1206 19:13:49.615802   86706 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1206 19:13:49.615811   86706 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1206 19:13:49.615821   86706 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1206 19:13:49.615849   86706 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1206 19:13:49.615862   86706 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 19:13:49.615878   86706 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 19:13:49.615888   86706 command_runner.go:130] > [certs] Using the existing "sa" key
	I1206 19:13:49.615922   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:13:49.667744   86706 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 19:13:49.800275   86706 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 19:13:49.899031   86706 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 19:13:49.968762   86706 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 19:13:50.094533   86706 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 19:13:50.097035   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:13:50.166800   86706 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 19:13:50.167912   86706 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 19:13:50.168020   86706 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1206 19:13:50.295545   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:13:50.418509   86706 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 19:13:50.418535   86706 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 19:13:50.418541   86706 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 19:13:50.418548   86706 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 19:13:50.418579   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:13:50.501334   86706 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 19:13:50.503948   86706 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:13:50.504012   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:13:50.523647   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:13:51.045961   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:13:51.546264   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:13:52.046139   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:13:52.546032   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:13:52.568955   86706 command_runner.go:130] > 1067
	I1206 19:13:52.569246   86706 api_server.go:72] duration metric: took 2.065290844s to wait for apiserver process to appear ...
	I1206 19:13:52.569271   86706 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:13:52.569291   86706 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:13:56.491270   86706 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:13:56.491301   86706 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:13:56.491317   86706 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:13:56.516718   86706 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:13:56.516747   86706 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:13:57.017311   86706 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:13:57.022486   86706 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:13:57.022519   86706 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:13:57.517058   86706 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:13:57.522527   86706 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:13:57.522556   86706 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:13:58.017207   86706 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:13:58.025894   86706 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I1206 19:13:58.025982   86706 round_trippers.go:463] GET https://192.168.39.125:8443/version
	I1206 19:13:58.025989   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:58.025997   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:58.026006   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:58.037135   86706 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1206 19:13:58.037168   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:58.037178   86706 round_trippers.go:580]     Audit-Id: 309383e4-9c08-4531-9d72-2d662064ca4f
	I1206 19:13:58.037190   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:58.037199   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:58.037207   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:58.037215   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:58.037222   86706 round_trippers.go:580]     Content-Length: 264
	I1206 19:13:58.037246   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:58 GMT
	I1206 19:13:58.037287   86706 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1206 19:13:58.037415   86706 api_server.go:141] control plane version: v1.28.4
	I1206 19:13:58.037440   86706 api_server.go:131] duration metric: took 5.468161335s to wait for apiserver health ...
	I1206 19:13:58.037458   86706 cni.go:84] Creating CNI manager for ""
	I1206 19:13:58.037469   86706 cni.go:136] 3 nodes found, recommending kindnet
	I1206 19:13:58.039379   86706 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1206 19:13:58.040740   86706 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 19:13:58.046508   86706 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1206 19:13:58.046546   86706 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1206 19:13:58.046558   86706 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1206 19:13:58.046568   86706 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:13:58.046578   86706 command_runner.go:130] > Access: 2023-12-06 19:13:22.670512873 +0000
	I1206 19:13:58.046588   86706 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1206 19:13:58.046596   86706 command_runner.go:130] > Change: 2023-12-06 19:13:20.668512873 +0000
	I1206 19:13:58.046605   86706 command_runner.go:130] >  Birth: -
	I1206 19:13:58.046682   86706 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 19:13:58.046699   86706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 19:13:58.065739   86706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 19:13:59.039879   86706 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:13:59.039905   86706 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:13:59.039915   86706 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1206 19:13:59.039929   86706 command_runner.go:130] > daemonset.apps/kindnet configured
	I1206 19:13:59.039956   86706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:13:59.040072   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:13:59.040084   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.040097   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.040106   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.047254   86706 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1206 19:13:59.047290   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.047302   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.047310   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.047318   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.047326   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.047333   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.047339   86706 round_trippers.go:580]     Audit-Id: e9a9f810-0883-4854-9559-df25bb8fff5b
	I1206 19:13:59.049035   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"796"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83700 chars]
	I1206 19:13:59.053023   86706 system_pods.go:59] 12 kube-system pods found
	I1206 19:13:59.053060   86706 system_pods.go:61] "coredns-5dd5756b68-h6rcq" [85247dde-4cee-482e-8f9b-a9e8f4e7172e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:13:59.053068   86706 system_pods.go:61] "etcd-multinode-593099" [17573829-76f1-4718-80d6-248db178e8d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:13:59.053081   86706 system_pods.go:61] "kindnet-2s5b8" [da77f62f-091e-45f0-b6a6-0bc04b1c1f5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 19:13:59.053087   86706 system_pods.go:61] "kindnet-mbkkj" [e67fa795-ace6-4463-b0be-493b26fec4e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 19:13:59.053095   86706 system_pods.go:61] "kindnet-x2r64" [1dafec99-c18b-40ca-8b9d-b5d520390c8c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 19:13:59.053105   86706 system_pods.go:61] "kube-apiserver-multinode-593099" [c32eea84-5395-4ffd-9fe4-51ae29b0861c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:13:59.053110   86706 system_pods.go:61] "kube-controller-manager-multinode-593099" [bd10545f-240d-418a-b4ca-a48c978a56c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:13:59.053118   86706 system_pods.go:61] "kube-proxy-ggxmb" [9967a10f-783d-4e8f-bb49-f609c7227307] Running
	I1206 19:13:59.053123   86706 system_pods.go:61] "kube-proxy-thqkt" [0012fda4-56e7-4054-ab90-1704569e66e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:13:59.053127   86706 system_pods.go:61] "kube-proxy-tp2wm" [366b51c9-af8f-4bd5-8200-dc43c4a3017c] Running
	I1206 19:13:59.053132   86706 system_pods.go:61] "kube-scheduler-multinode-593099" [7ae8a659-33ba-4e2b-9211-8d84efe7e5a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:13:59.053137   86706 system_pods.go:61] "storage-provisioner" [35974b37-5aff-4940-8e2d-5fec9d1e2166] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:13:59.053151   86706 system_pods.go:74] duration metric: took 13.1861ms to wait for pod list to return data ...
	I1206 19:13:59.053162   86706 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:13:59.053216   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes
	I1206 19:13:59.053223   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.053250   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.053263   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.055996   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:13:59.056013   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.056023   86706 round_trippers.go:580]     Audit-Id: 08a594ae-999d-4b24-b041-eece28986b67
	I1206 19:13:59.056032   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.056041   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.056048   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.056054   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.056062   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.056462   86706 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"796"},"items":[{"metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16354 chars]
	I1206 19:13:59.057309   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:13:59.057347   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:13:59.057359   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:13:59.057367   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:13:59.057373   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:13:59.057388   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:13:59.057394   86706 node_conditions.go:105] duration metric: took 4.227591ms to run NodePressure ...
	I1206 19:13:59.057422   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:13:59.346657   86706 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1206 19:13:59.460080   86706 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1206 19:13:59.460155   86706 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:13:59.460257   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1206 19:13:59.460267   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.460275   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.460281   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.465171   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:13:59.465203   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.465215   86706 round_trippers.go:580]     Audit-Id: 5666c9b6-635c-4039-9678-c27bd7e4cc12
	I1206 19:13:59.465224   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.465258   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.465270   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.465279   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.465297   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.466941   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"803"},"items":[{"metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1206 19:13:59.467925   86706 kubeadm.go:787] kubelet initialised
	I1206 19:13:59.467943   86706 kubeadm.go:788] duration metric: took 7.778554ms waiting for restarted kubelet to initialise ...
	I1206 19:13:59.467950   86706 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:13:59.468011   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:13:59.468019   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.468027   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.468033   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.472077   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:13:59.472115   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.472125   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.472134   86706 round_trippers.go:580]     Audit-Id: d935faf1-3859-4ed3-a40f-e610491406fb
	I1206 19:13:59.472143   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.472150   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.472162   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.472174   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.473150   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"803"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83700 chars]
	I1206 19:13:59.475706   86706 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:13:59.475817   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:13:59.475835   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.475847   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.475861   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.478778   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:13:59.478794   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.478801   86706 round_trippers.go:580]     Audit-Id: 6d0a0aa9-962f-46a5-a02b-b9a48e312570
	I1206 19:13:59.478806   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.478811   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.478816   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.478821   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.478826   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.479722   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:13:59.480186   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:13:59.480203   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.480210   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.480217   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.484220   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:13:59.484239   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.484249   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.484256   86706 round_trippers.go:580]     Audit-Id: 733aeb50-ce0a-42b4-9471-ef230d52b3af
	I1206 19:13:59.484263   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.484270   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.484277   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.484285   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.484817   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:13:59.485148   86706 pod_ready.go:97] node "multinode-593099" hosting pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.485172   86706 pod_ready.go:81] duration metric: took 9.442471ms waiting for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	E1206 19:13:59.485209   86706 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-593099" hosting pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.485224   86706 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:13:59.485305   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:13:59.485316   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.485327   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.485337   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.487287   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:13:59.487314   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.487323   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.487331   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.487340   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.487348   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.487365   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.487371   86706 round_trippers.go:580]     Audit-Id: 50c0d276-b92b-44c8-b595-0c5d3eb56e19
	I1206 19:13:59.487554   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:13:59.487947   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:13:59.487964   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.487970   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.487976   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.490067   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:13:59.490086   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.490096   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.490104   86706 round_trippers.go:580]     Audit-Id: 0e57d0b8-394f-403f-8558-ce5c74cebe5b
	I1206 19:13:59.490112   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.490121   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.490131   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.490141   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.490359   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:13:59.490725   86706 pod_ready.go:97] node "multinode-593099" hosting pod "etcd-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.490750   86706 pod_ready.go:81] duration metric: took 5.505031ms waiting for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	E1206 19:13:59.490759   86706 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-593099" hosting pod "etcd-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.490771   86706 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:13:59.490827   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-593099
	I1206 19:13:59.490834   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.490841   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.490846   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.492696   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:13:59.492710   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.492722   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.492733   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.492740   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.492747   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.492755   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.492767   86706 round_trippers.go:580]     Audit-Id: f2e94f1a-76d8-408e-927a-c126aa840614
	I1206 19:13:59.493003   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-593099","namespace":"kube-system","uid":"c32eea84-5395-4ffd-9fe4-51ae29b0861c","resourceVersion":"762","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.125:8443","kubernetes.io/config.hash":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.mirror":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.seen":"2023-12-06T19:03:30.652197401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1206 19:13:59.493421   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:13:59.493438   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.493446   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.493451   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.495454   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:13:59.495467   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.495473   86706 round_trippers.go:580]     Audit-Id: bd89c45c-279d-42af-acb0-0d6b0252fde9
	I1206 19:13:59.495479   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.495484   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.495488   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.495493   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.495498   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.495671   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:13:59.496004   86706 pod_ready.go:97] node "multinode-593099" hosting pod "kube-apiserver-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.496024   86706 pod_ready.go:81] duration metric: took 5.244225ms waiting for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	E1206 19:13:59.496033   86706 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-593099" hosting pod "kube-apiserver-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.496038   86706 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:13:59.496098   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-593099
	I1206 19:13:59.496108   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.496115   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.496121   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.498689   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:13:59.498704   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.498716   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.498721   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.498726   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.498731   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.498736   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.498741   86706 round_trippers.go:580]     Audit-Id: a78720b6-99c5-4e4d-9210-517bbfb7a5e0
	I1206 19:13:59.499256   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-593099","namespace":"kube-system","uid":"bd10545f-240d-418a-b4ca-a48c978a56c9","resourceVersion":"768","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.mirror":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.seen":"2023-12-06T19:03:30.652198715Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1206 19:13:59.499617   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:13:59.499629   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.499636   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.499642   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.501911   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:13:59.501926   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.501932   86706 round_trippers.go:580]     Audit-Id: cfd11f78-b42c-4195-86bc-1bf0c443c974
	I1206 19:13:59.501937   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.501943   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.501948   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.501953   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.501962   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.502145   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:13:59.502532   86706 pod_ready.go:97] node "multinode-593099" hosting pod "kube-controller-manager-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.502554   86706 pod_ready.go:81] duration metric: took 6.507525ms waiting for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	E1206 19:13:59.502563   86706 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-593099" hosting pod "kube-controller-manager-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:13:59.502569   86706 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:13:59.660862   86706 request.go:629] Waited for 158.208622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:13:59.660967   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:13:59.660979   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.660995   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.661008   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.668698   86706 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1206 19:13:59.668728   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.668739   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.668747   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.668755   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.668762   86706 round_trippers.go:580]     Audit-Id: 01842cb8-92ee-4191-87bd-d3711359de35
	I1206 19:13:59.668771   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.668778   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.669224   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggxmb","generateName":"kube-proxy-","namespace":"kube-system","uid":"9967a10f-783d-4e8f-bb49-f609c7227307","resourceVersion":"470","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:13:59.861215   86706 request.go:629] Waited for 191.376054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:13:59.861329   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:13:59.861338   86706 round_trippers.go:469] Request Headers:
	I1206 19:13:59.861347   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:13:59.861357   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:13:59.868292   86706 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1206 19:13:59.868319   86706 round_trippers.go:577] Response Headers:
	I1206 19:13:59.868327   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:13:59.868332   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:13:59 GMT
	I1206 19:13:59.868337   86706 round_trippers.go:580]     Audit-Id: e4c1ab64-25e6-4487-acf0-27b0a8f7a0ad
	I1206 19:13:59.868342   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:13:59.868347   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:13:59.868352   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:13:59.868519   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"702","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_06_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4234 chars]
	I1206 19:13:59.868802   86706 pod_ready.go:92] pod "kube-proxy-ggxmb" in "kube-system" namespace has status "Ready":"True"
	I1206 19:13:59.868817   86706 pod_ready.go:81] duration metric: took 366.242317ms waiting for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:13:59.868826   86706 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:00.061319   86706 request.go:629] Waited for 192.425847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:14:00.061391   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:14:00.061396   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:00.061404   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:00.061410   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:00.065400   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:00.065424   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:00.065446   86706 round_trippers.go:580]     Audit-Id: 124a1a83-827b-46e9-bdca-be0476947c3c
	I1206 19:14:00.065452   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:00.065457   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:00.065465   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:00.065470   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:00.065477   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:00 GMT
	I1206 19:14:00.065874   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-thqkt","generateName":"kube-proxy-","namespace":"kube-system","uid":"0012fda4-56e7-4054-ab90-1704569e66e8","resourceVersion":"809","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:14:00.260751   86706 request.go:629] Waited for 194.420221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:00.260817   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:00.260823   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:00.260831   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:00.260837   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:00.263893   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:00.263914   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:00.263920   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:00.263926   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:00 GMT
	I1206 19:14:00.263931   86706 round_trippers.go:580]     Audit-Id: fe28fcf8-5859-4a11-9bfe-81c054432af1
	I1206 19:14:00.263936   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:00.263941   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:00.263946   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:00.264123   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:14:00.264510   86706 pod_ready.go:97] node "multinode-593099" hosting pod "kube-proxy-thqkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:14:00.264530   86706 pod_ready.go:81] duration metric: took 395.699555ms waiting for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	E1206 19:14:00.264539   86706 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-593099" hosting pod "kube-proxy-thqkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:14:00.264545   86706 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:00.461007   86706 request.go:629] Waited for 196.3941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:14:00.461112   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:14:00.461125   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:00.461135   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:00.461141   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:00.463907   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:00.463928   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:00.463940   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:00.463946   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:00.463951   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:00 GMT
	I1206 19:14:00.463956   86706 round_trippers.go:580]     Audit-Id: 2b4e59b0-1b14-46cb-a9f4-85025f3da6b4
	I1206 19:14:00.463961   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:00.463966   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:00.464179   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2wm","generateName":"kube-proxy-","namespace":"kube-system","uid":"366b51c9-af8f-4bd5-8200-dc43c4a3017c","resourceVersion":"676","creationTimestamp":"2023-12-06T19:05:15Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:05:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1206 19:14:00.661178   86706 request.go:629] Waited for 196.43337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:14:00.661258   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:14:00.661265   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:00.661277   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:00.661286   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:00.664203   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:00.664222   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:00.664229   86706 round_trippers.go:580]     Audit-Id: ab05a5dd-71f5-4e93-832a-b2d6287c3ab6
	I1206 19:14:00.664242   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:00.664250   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:00.664258   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:00.664267   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:00.664275   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:00 GMT
	I1206 19:14:00.664450   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m03","uid":"a37befac-9ea6-49a7-a8c3-a9b16981befa","resourceVersion":"696","creationTimestamp":"2023-12-06T19:05:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_06_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:05:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1206 19:14:00.664731   86706 pod_ready.go:92] pod "kube-proxy-tp2wm" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:00.664748   86706 pod_ready.go:81] duration metric: took 400.197203ms waiting for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:00.664756   86706 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:00.861238   86706 request.go:629] Waited for 196.387327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:14:00.861307   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:14:00.861314   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:00.861327   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:00.861346   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:00.864173   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:00.864197   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:00.864205   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:00.864211   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:00.864229   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:00.864237   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:00.864244   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:00 GMT
	I1206 19:14:00.864251   86706 round_trippers.go:580]     Audit-Id: bc7ce9ce-705d-4233-9b95-2ba8bdc68d2b
	I1206 19:14:00.864624   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-593099","namespace":"kube-system","uid":"7ae8a659-33ba-4e2b-9211-8d84efe7e5a4","resourceVersion":"769","creationTimestamp":"2023-12-06T19:03:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.mirror":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.seen":"2023-12-06T19:03:21.456083881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1206 19:14:01.060358   86706 request.go:629] Waited for 195.30129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:01.060427   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:01.060432   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:01.060441   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:01.060447   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:01.062967   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:01.062990   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:01.062998   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:01.063006   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:01.063014   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:01.063021   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:01.063030   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:01 GMT
	I1206 19:14:01.063037   86706 round_trippers.go:580]     Audit-Id: 275a9dd9-c438-4dd7-9515-c4c70d0be0ae
	I1206 19:14:01.063312   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:14:01.063746   86706 pod_ready.go:97] node "multinode-593099" hosting pod "kube-scheduler-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:14:01.063790   86706 pod_ready.go:81] duration metric: took 399.02799ms waiting for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	E1206 19:14:01.063801   86706 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-593099" hosting pod "kube-scheduler-multinode-593099" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-593099" has status "Ready":"False"
	I1206 19:14:01.063809   86706 pod_ready.go:38] duration metric: took 1.595850455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:14:01.063831   86706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:14:01.079992   86706 command_runner.go:130] > -16
	I1206 19:14:01.080047   86706 ops.go:34] apiserver oom_adj: -16
	I1206 19:14:01.080057   86706 kubeadm.go:640] restartCluster took 21.661705794s
	I1206 19:14:01.080067   86706 kubeadm.go:406] StartCluster complete in 21.711680954s
	I1206 19:14:01.080093   86706 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:14:01.080195   86706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:14:01.081112   86706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:14:01.081420   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:14:01.081548   86706 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:14:01.083888   86706 out.go:177] * Enabled addons: 
	I1206 19:14:01.081774   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:14:01.081837   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:14:01.085500   86706 addons.go:502] enable addons completed in 3.985287ms: enabled=[]
	I1206 19:14:01.085836   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:14:01.086317   86706 round_trippers.go:463] GET https://192.168.39.125:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 19:14:01.086336   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:01.086347   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:01.086355   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:01.089525   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:01.089542   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:01.089549   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:01.089554   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:01.089559   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:01.089565   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:01.089570   86706 round_trippers.go:580]     Content-Length: 291
	I1206 19:14:01.089575   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:01 GMT
	I1206 19:14:01.089587   86706 round_trippers.go:580]     Audit-Id: 041ad45d-17da-40ae-b3de-55d239dd406f
	I1206 19:14:01.089640   86706 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"802","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1206 19:14:01.089804   86706 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-593099" context rescaled to 1 replicas
	I1206 19:14:01.089839   86706 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:14:01.092013   86706 out.go:177] * Verifying Kubernetes components...
	I1206 19:14:01.093586   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:14:01.235232   86706 command_runner.go:130] > apiVersion: v1
	I1206 19:14:01.235261   86706 command_runner.go:130] > data:
	I1206 19:14:01.235265   86706 command_runner.go:130] >   Corefile: |
	I1206 19:14:01.235269   86706 command_runner.go:130] >     .:53 {
	I1206 19:14:01.235273   86706 command_runner.go:130] >         log
	I1206 19:14:01.235277   86706 command_runner.go:130] >         errors
	I1206 19:14:01.235281   86706 command_runner.go:130] >         health {
	I1206 19:14:01.235292   86706 command_runner.go:130] >            lameduck 5s
	I1206 19:14:01.235299   86706 command_runner.go:130] >         }
	I1206 19:14:01.235311   86706 command_runner.go:130] >         ready
	I1206 19:14:01.235319   86706 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1206 19:14:01.235325   86706 command_runner.go:130] >            pods insecure
	I1206 19:14:01.235333   86706 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1206 19:14:01.235339   86706 command_runner.go:130] >            ttl 30
	I1206 19:14:01.235344   86706 command_runner.go:130] >         }
	I1206 19:14:01.235360   86706 command_runner.go:130] >         prometheus :9153
	I1206 19:14:01.235369   86706 command_runner.go:130] >         hosts {
	I1206 19:14:01.235377   86706 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1206 19:14:01.235382   86706 command_runner.go:130] >            fallthrough
	I1206 19:14:01.235385   86706 command_runner.go:130] >         }
	I1206 19:14:01.235391   86706 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1206 19:14:01.235396   86706 command_runner.go:130] >            max_concurrent 1000
	I1206 19:14:01.235399   86706 command_runner.go:130] >         }
	I1206 19:14:01.235403   86706 command_runner.go:130] >         cache 30
	I1206 19:14:01.235414   86706 command_runner.go:130] >         loop
	I1206 19:14:01.235428   86706 command_runner.go:130] >         reload
	I1206 19:14:01.235439   86706 command_runner.go:130] >         loadbalance
	I1206 19:14:01.235442   86706 command_runner.go:130] >     }
	I1206 19:14:01.235446   86706 command_runner.go:130] > kind: ConfigMap
	I1206 19:14:01.235450   86706 command_runner.go:130] > metadata:
	I1206 19:14:01.235455   86706 command_runner.go:130] >   creationTimestamp: "2023-12-06T19:03:30Z"
	I1206 19:14:01.235459   86706 command_runner.go:130] >   name: coredns
	I1206 19:14:01.235463   86706 command_runner.go:130] >   namespace: kube-system
	I1206 19:14:01.235471   86706 command_runner.go:130] >   resourceVersion: "346"
	I1206 19:14:01.235476   86706 command_runner.go:130] >   uid: b66768a8-338a-4581-9dee-65cb570c9e23
	I1206 19:14:01.235583   86706 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1206 19:14:01.235582   86706 node_ready.go:35] waiting up to 6m0s for node "multinode-593099" to be "Ready" ...
	I1206 19:14:01.260970   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:01.260992   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:01.261001   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:01.261007   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:01.263774   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:01.263802   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:01.263812   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:01.263821   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:01.263830   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:01.263838   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:01 GMT
	I1206 19:14:01.263847   86706 round_trippers.go:580]     Audit-Id: bc909ee0-c148-4e4b-82ee-19fdd6151efd
	I1206 19:14:01.263856   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:01.264020   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:14:01.460995   86706 request.go:629] Waited for 196.426734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:01.461070   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:01.461075   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:01.461086   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:01.461095   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:01.464282   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:01.464305   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:01.464313   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:01.464319   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:01.464324   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:01.464329   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:01.464335   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:01 GMT
	I1206 19:14:01.464340   86706 round_trippers.go:580]     Audit-Id: 946504d3-46a8-4533-8c2a-f22840710bbe
	I1206 19:14:01.464522   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:14:01.965581   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:01.965607   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:01.965615   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:01.965622   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:01.969277   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:01.969305   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:01.969313   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:01 GMT
	I1206 19:14:01.969318   86706 round_trippers.go:580]     Audit-Id: 284ebaa2-c3ea-4cc5-86af-fb109783cd20
	I1206 19:14:01.969330   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:01.969340   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:01.969348   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:01.969356   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:01.969526   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:14:02.465158   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:02.465184   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:02.465193   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:02.465199   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:02.467719   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:02.467742   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:02.467749   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:02.467755   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:02 GMT
	I1206 19:14:02.467763   86706 round_trippers.go:580]     Audit-Id: b0077b04-6c3f-4f6c-8489-7276c07b0b38
	I1206 19:14:02.467770   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:02.467781   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:02.467786   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:02.468178   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"705","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1206 19:14:02.965950   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:02.965978   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:02.965986   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:02.965992   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:02.969186   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:02.969209   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:02.969217   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:02.969226   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:02.969252   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:02.969264   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:02 GMT
	I1206 19:14:02.969273   86706 round_trippers.go:580]     Audit-Id: afb93165-ea53-4339-b33d-4f85b534231b
	I1206 19:14:02.969284   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:02.969439   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:02.969799   86706 node_ready.go:49] node "multinode-593099" has status "Ready":"True"
	I1206 19:14:02.969816   86706 node_ready.go:38] duration metric: took 1.734205877s waiting for node "multinode-593099" to be "Ready" ...
	I1206 19:14:02.969835   86706 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:14:02.969897   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:14:02.969907   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:02.969919   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:02.969933   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:02.973656   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:02.973680   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:02.973689   86706 round_trippers.go:580]     Audit-Id: cdfb0913-c3ff-4e20-9f84-00e5e0e8dc98
	I1206 19:14:02.973697   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:02.973713   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:02.973721   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:02.973734   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:02.973742   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:02 GMT
	I1206 19:14:02.975816   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"821"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82949 chars]
	I1206 19:14:02.978287   86706 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:02.978378   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:02.978389   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:02.978400   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:02.978410   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:02.981193   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:02.981210   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:02.981216   86706 round_trippers.go:580]     Audit-Id: 34b3993f-e9e2-4614-9634-cc0cec5cce7e
	I1206 19:14:02.981227   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:02.981249   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:02.981257   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:02.981271   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:02.981287   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:02 GMT
	I1206 19:14:02.981527   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:02.981947   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:02.981965   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:02.981975   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:02.981983   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:02.984163   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:02.984182   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:02.984191   86706 round_trippers.go:580]     Audit-Id: 32144891-dd0e-4b56-bdb2-01f9664b0925
	I1206 19:14:02.984199   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:02.984207   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:02.984214   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:02.984227   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:02.984234   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:02 GMT
	I1206 19:14:02.984383   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:02.984805   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:02.984821   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:02.984828   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:02.984840   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:02.987426   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:02.987442   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:02.987457   86706 round_trippers.go:580]     Audit-Id: 1ccca897-4159-4b0d-9c78-63c0494e2a84
	I1206 19:14:02.987467   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:02.987476   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:02.987489   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:02.987501   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:02.987514   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:02 GMT
	I1206 19:14:02.987659   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:03.060303   86706 request.go:629] Waited for 72.201719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:03.060404   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:03.060413   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:03.060421   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:03.060431   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:03.063398   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:03.063419   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:03.063429   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:03.063437   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:03.063450   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:03.063465   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:03.063474   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:03 GMT
	I1206 19:14:03.063487   86706 round_trippers.go:580]     Audit-Id: c8c933c4-7b40-4b16-af85-a568b1cb14de
	I1206 19:14:03.063608   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:03.564757   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:03.564792   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:03.564801   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:03.564807   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:03.569595   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:14:03.569621   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:03.569642   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:03 GMT
	I1206 19:14:03.569650   86706 round_trippers.go:580]     Audit-Id: 0aff7347-743a-4aa2-88c7-bacc1bffd209
	I1206 19:14:03.569657   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:03.569665   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:03.569673   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:03.569682   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:03.569820   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:03.570343   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:03.570364   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:03.570375   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:03.570383   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:03.572542   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:03.572559   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:03.572565   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:03.572573   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:03.572581   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:03 GMT
	I1206 19:14:03.572589   86706 round_trippers.go:580]     Audit-Id: 9140b8be-5e20-48c8-a9d8-3f66ca8627b0
	I1206 19:14:03.572598   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:03.572611   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:03.572774   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:04.064426   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:04.064452   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:04.064461   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:04.064467   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:04.067747   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:04.067772   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:04.067785   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:04.067793   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:04.067800   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:04.067807   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:04 GMT
	I1206 19:14:04.067815   86706 round_trippers.go:580]     Audit-Id: e957e9f2-5a73-46ea-97c1-8a000fb5baa0
	I1206 19:14:04.067823   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:04.068089   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:04.068558   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:04.068575   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:04.068583   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:04.068589   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:04.070610   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:04.070631   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:04.070640   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:04.070648   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:04 GMT
	I1206 19:14:04.070657   86706 round_trippers.go:580]     Audit-Id: f8b3c997-96b7-4664-910a-3c5c690d86e5
	I1206 19:14:04.070675   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:04.070692   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:04.070707   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:04.070881   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:04.564259   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:04.564286   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:04.564294   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:04.564301   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:04.568878   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:14:04.568904   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:04.568913   86706 round_trippers.go:580]     Audit-Id: 8dc303f0-dfd6-4cdc-ab87-452344e829f6
	I1206 19:14:04.568921   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:04.568929   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:04.568936   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:04.568945   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:04.568953   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:04 GMT
	I1206 19:14:04.569477   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:04.569954   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:04.569971   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:04.569979   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:04.569984   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:04.571958   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:14:04.571979   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:04.571988   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:04.571996   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:04.572014   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:04 GMT
	I1206 19:14:04.572023   86706 round_trippers.go:580]     Audit-Id: ef3c4e17-0a3d-4c33-a146-7ae6364af8b5
	I1206 19:14:04.572032   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:04.572039   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:04.572197   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:05.064265   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:05.064292   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:05.064300   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:05.064307   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:05.068677   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:14:05.068706   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:05.068715   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:05.068723   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:05 GMT
	I1206 19:14:05.068731   86706 round_trippers.go:580]     Audit-Id: c43bd134-791b-4e81-8a16-7e4e2beac226
	I1206 19:14:05.068739   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:05.068747   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:05.068757   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:05.068878   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:05.069368   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:05.069385   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:05.069396   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:05.069411   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:05.071900   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:05.071925   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:05.071934   86706 round_trippers.go:580]     Audit-Id: 29788f37-49a0-4125-84fc-1711af507326
	I1206 19:14:05.071943   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:05.071951   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:05.071959   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:05.071966   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:05.071971   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:05 GMT
	I1206 19:14:05.072101   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:05.072450   86706 pod_ready.go:102] pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace has status "Ready":"False"
	I1206 19:14:05.564401   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:05.564433   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:05.564443   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:05.564450   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:05.567616   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:05.567644   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:05.567654   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:05.567661   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:05.567668   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:05 GMT
	I1206 19:14:05.567675   86706 round_trippers.go:580]     Audit-Id: 9c82ccc3-3764-4ef4-b88f-595de0bedf24
	I1206 19:14:05.567683   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:05.567691   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:05.568063   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:05.568649   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:05.568666   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:05.568677   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:05.568687   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:05.570974   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:05.570991   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:05.570997   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:05.571003   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:05.571008   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:05 GMT
	I1206 19:14:05.571013   86706 round_trippers.go:580]     Audit-Id: 68730d78-d5fe-49ca-a09e-546f9d65a136
	I1206 19:14:05.571018   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:05.571024   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:05.571153   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:06.064872   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:06.064898   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:06.064908   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:06.064914   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:06.067911   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:06.067936   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:06.067943   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:06.067949   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:06.067955   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:06 GMT
	I1206 19:14:06.067960   86706 round_trippers.go:580]     Audit-Id: e142c925-4457-431b-b7c1-328b3f753478
	I1206 19:14:06.067965   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:06.067973   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:06.068213   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:06.068825   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:06.068849   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:06.068859   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:06.068872   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:06.071489   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:06.071511   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:06.071521   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:06.071535   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:06 GMT
	I1206 19:14:06.071544   86706 round_trippers.go:580]     Audit-Id: f1f9b5e0-aa84-4911-bdc4-8d39f10a391a
	I1206 19:14:06.071553   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:06.071561   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:06.071569   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:06.071680   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:06.564527   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:06.564554   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:06.564563   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:06.564569   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:06.567406   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:06.567436   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:06.567446   86706 round_trippers.go:580]     Audit-Id: 3232ad33-d611-45cc-be05-e688c556ee5b
	I1206 19:14:06.567455   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:06.567463   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:06.567471   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:06.567478   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:06.567487   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:06 GMT
	I1206 19:14:06.568146   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"767","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1206 19:14:06.568686   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:06.568702   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:06.568710   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:06.568716   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:06.570984   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:06.570999   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:06.571008   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:06.571017   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:06 GMT
	I1206 19:14:06.571024   86706 round_trippers.go:580]     Audit-Id: f32227f1-ff68-49ce-bfde-1b0b47e128b6
	I1206 19:14:06.571032   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:06.571040   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:06.571049   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:06.571382   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:07.065108   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:14:07.065134   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.065142   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.065148   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.072861   86706 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1206 19:14:07.072890   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.072900   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.072907   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.072915   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.072924   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.072932   86706 round_trippers.go:580]     Audit-Id: c2b706ee-07a9-4f33-9390-d9d61b167bca
	I1206 19:14:07.072941   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.073381   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"828","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1206 19:14:07.074039   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:07.074077   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.074089   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.074104   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.078506   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:14:07.078524   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.078530   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.078536   86706 round_trippers.go:580]     Audit-Id: 774ad3ea-0a53-48ad-a6c2-fde5ea8c685a
	I1206 19:14:07.078540   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.078545   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.078550   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.078555   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.078756   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:07.079237   86706 pod_ready.go:92] pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:07.079266   86706 pod_ready.go:81] duration metric: took 4.100956312s waiting for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:07.079284   86706 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
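The pod_ready lines above summarize the wait loop driving these requests: the pod is re-fetched roughly every half second until its Ready condition reports True, or the per-pod timeout (6m0s here) expires, and the loop then moves on to the next control-plane pod. As a hedged, minimal sketch of that pattern using client-go (this is not minikube's pod_ready.go; the function name waitForPodReady, the 500ms interval, and the kubeconfig handling are illustrative assumptions), the check amounts to:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls the named pod until its Ready condition is True,
    // giving up after the supplied timeout.
    func waitForPodReady(ctx context.Context, client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	// Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := waitForPodReady(context.Background(), clientset, "kube-system", "coredns-5dd5756b68-h6rcq", 6*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("pod is Ready")
    }
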
	I1206 19:14:07.079367   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:07.079379   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.079393   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.079405   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.082067   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:07.082084   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.082091   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.082096   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.082101   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.082106   86706 round_trippers.go:580]     Audit-Id: 3282e6d1-c84f-496c-8409-022ca3a54a5c
	I1206 19:14:07.082111   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.082116   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.082307   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:07.082834   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:07.082854   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.082865   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.082874   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.085808   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:07.085826   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.085840   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.085855   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.085862   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.085870   86706 round_trippers.go:580]     Audit-Id: 68397149-d1c9-47eb-a4cc-ccd994c6edc0
	I1206 19:14:07.085875   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.085880   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.086070   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:07.086397   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:07.086411   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.086421   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.086430   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.089140   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:07.089162   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.089173   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.089190   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.089198   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.089210   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.089220   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.089241   86706 round_trippers.go:580]     Audit-Id: 6d77f436-36fe-4b81-9a79-59a6d7ee1e10
	I1206 19:14:07.090804   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:07.091160   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:07.091175   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.091186   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.091196   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.098738   86706 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1206 19:14:07.098763   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.098771   86706 round_trippers.go:580]     Audit-Id: 9756f76e-f12b-447a-9f71-85bffc55be06
	I1206 19:14:07.098776   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.098781   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.098786   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.098791   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.098796   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.098933   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:07.600106   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:07.600140   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.600150   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.600156   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.603092   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:07.603114   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.603124   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.603132   86706 round_trippers.go:580]     Audit-Id: ae7abfec-379b-466b-962a-a9065a2c870c
	I1206 19:14:07.603139   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.603146   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.603153   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.603162   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.603377   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:07.603909   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:07.603927   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:07.603935   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:07.603942   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:07.606343   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:07.606362   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:07.606368   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:07.606373   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:07.606378   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:07.606383   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:07 GMT
	I1206 19:14:07.606388   86706 round_trippers.go:580]     Audit-Id: c5331529-8fc2-4908-a8a1-ec088cf2da1f
	I1206 19:14:07.606394   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:07.606697   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:08.100419   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:08.100448   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:08.100457   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:08.100463   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:08.103589   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:08.103615   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:08.103625   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:08 GMT
	I1206 19:14:08.103634   86706 round_trippers.go:580]     Audit-Id: 24ef6e13-404b-48cf-8649-322c2387bc36
	I1206 19:14:08.103642   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:08.103651   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:08.103667   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:08.103678   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:08.103910   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:08.104407   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:08.104430   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:08.104440   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:08.104446   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:08.106835   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:08.106857   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:08.106867   86706 round_trippers.go:580]     Audit-Id: 92068d47-53fb-4fda-b994-2087c46aa6d4
	I1206 19:14:08.106875   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:08.106885   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:08.106898   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:08.106909   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:08.106918   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:08 GMT
	I1206 19:14:08.107080   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:08.599697   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:08.599729   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:08.599739   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:08.599747   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:08.602502   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:08.602531   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:08.602549   86706 round_trippers.go:580]     Audit-Id: c03eefba-dbf9-4ed0-a1d2-1c100ed96e4c
	I1206 19:14:08.602556   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:08.602561   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:08.602569   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:08.602577   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:08.602588   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:08 GMT
	I1206 19:14:08.602773   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:08.603177   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:08.603190   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:08.603198   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:08.603207   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:08.605727   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:08.605746   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:08.605756   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:08.605764   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:08.605771   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:08 GMT
	I1206 19:14:08.605782   86706 round_trippers.go:580]     Audit-Id: 31988a3e-f92f-4772-8e49-6abfda4dad77
	I1206 19:14:08.605792   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:08.605813   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:08.606179   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:09.099832   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:09.099857   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:09.099866   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:09.099872   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:09.108907   86706 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1206 19:14:09.108940   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:09.108952   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:09.108960   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:09.108968   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:09.108976   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:09 GMT
	I1206 19:14:09.108984   86706 round_trippers.go:580]     Audit-Id: 99b7a92b-d7fd-45f5-8ee5-6d5a8d6608b5
	I1206 19:14:09.108992   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:09.109200   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:09.109766   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:09.109789   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:09.109800   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:09.109810   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:09.112666   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:09.112685   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:09.112694   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:09.112702   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:09.112709   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:09 GMT
	I1206 19:14:09.112729   86706 round_trippers.go:580]     Audit-Id: 70c2c1b0-5cd9-4de6-9963-8255950a06de
	I1206 19:14:09.112738   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:09.112748   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:09.112938   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:09.113248   86706 pod_ready.go:102] pod "etcd-multinode-593099" in "kube-system" namespace has status "Ready":"False"
	I1206 19:14:09.600294   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:09.600322   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:09.600333   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:09.600341   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:09.603272   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:09.603304   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:09.603313   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:09.603321   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:09.603329   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:09 GMT
	I1206 19:14:09.603337   86706 round_trippers.go:580]     Audit-Id: e423cb59-9aef-4bb7-87c5-82f9d0976f7a
	I1206 19:14:09.603345   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:09.603370   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:09.603563   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:09.603958   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:09.603971   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:09.603978   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:09.603984   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:09.606378   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:09.606409   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:09.606420   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:09 GMT
	I1206 19:14:09.606426   86706 round_trippers.go:580]     Audit-Id: 07a0b5ff-99c5-4493-aab1-d89a60601cf7
	I1206 19:14:09.606434   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:09.606440   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:09.606445   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:09.606450   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:09.606598   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:10.100269   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:10.100297   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:10.100306   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:10.100315   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:10.103148   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:10.103170   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:10.103178   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:10 GMT
	I1206 19:14:10.103183   86706 round_trippers.go:580]     Audit-Id: 18ca4b2f-7898-4b32-a8c0-fac5065e5d08
	I1206 19:14:10.103188   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:10.103194   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:10.103199   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:10.103204   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:10.103478   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:10.104040   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:10.104061   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:10.104072   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:10.104082   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:10.106534   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:10.106553   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:10.106563   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:10 GMT
	I1206 19:14:10.106571   86706 round_trippers.go:580]     Audit-Id: c104f597-b75b-406c-9459-87762030644e
	I1206 19:14:10.106579   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:10.106587   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:10.106599   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:10.106609   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:10.107053   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:10.599802   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:10.599839   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:10.599851   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:10.599860   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:10.603299   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:10.603326   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:10.603337   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:10.603345   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:10 GMT
	I1206 19:14:10.603359   86706 round_trippers.go:580]     Audit-Id: 565dbca2-028f-47ed-85c9-4ca53f47e911
	I1206 19:14:10.603366   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:10.603374   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:10.603383   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:10.603539   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:10.604081   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:10.604102   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:10.604114   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:10.604124   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:10.606561   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:10.606594   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:10.606605   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:10.606613   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:10.606622   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:10.606631   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:10.606643   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:10 GMT
	I1206 19:14:10.606651   86706 round_trippers.go:580]     Audit-Id: cd765e95-96da-4d94-b1bd-173873b56681
	I1206 19:14:10.607091   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:11.099638   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:11.099672   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:11.099685   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:11.099694   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:11.103185   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:11.103204   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:11.103218   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:11.103234   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:11.103246   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:11.103256   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:11.103268   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:11 GMT
	I1206 19:14:11.103278   86706 round_trippers.go:580]     Audit-Id: 572a9988-d188-4e24-8d6f-8dcc228150a5
	I1206 19:14:11.103485   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:11.104004   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:11.104021   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:11.104028   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:11.104035   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:11.106244   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:11.106259   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:11.106266   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:11.106272   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:11.106277   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:11.106282   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:11 GMT
	I1206 19:14:11.106287   86706 round_trippers.go:580]     Audit-Id: 8d4b7754-a69d-4b2e-9ec6-4a7d283f7445
	I1206 19:14:11.106293   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:11.106570   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:11.599868   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:11.599910   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:11.599919   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:11.599934   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:11.603352   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:11.603377   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:11.603385   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:11.603390   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:11.603395   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:11.603401   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:11 GMT
	I1206 19:14:11.603406   86706 round_trippers.go:580]     Audit-Id: 3ae218c6-eeb9-4854-9da9-ca5848a2b76a
	I1206 19:14:11.603412   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:11.603921   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"765","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1206 19:14:11.604304   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:11.604317   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:11.604324   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:11.604330   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:11.607485   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:11.607499   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:11.607505   86706 round_trippers.go:580]     Audit-Id: eeacbc52-427a-44b0-a528-ed11af1a32c6
	I1206 19:14:11.607511   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:11.607516   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:11.607521   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:11.607526   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:11.607539   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:11 GMT
	I1206 19:14:11.607834   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:11.608121   86706 pod_ready.go:102] pod "etcd-multinode-593099" in "kube-system" namespace has status "Ready":"False"
	I1206 19:14:12.100005   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:14:12.100029   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.100037   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.100044   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.103443   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:12.103481   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.103493   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.103501   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.103509   86706 round_trippers.go:580]     Audit-Id: 5e5da1f4-1437-4ba3-a06b-19def71f5c1a
	I1206 19:14:12.103517   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.103529   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.103538   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.104017   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"848","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1206 19:14:12.104399   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:12.104410   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.104418   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.104424   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.106528   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:12.106547   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.106553   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.106559   86706 round_trippers.go:580]     Audit-Id: ab209c62-df27-4878-a05a-63861a7b260d
	I1206 19:14:12.106564   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.106569   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.106573   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.106578   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.106734   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:12.107048   86706 pod_ready.go:92] pod "etcd-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:12.107076   86706 pod_ready.go:81] duration metric: took 5.027773137s waiting for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.107097   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.107151   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-593099
	I1206 19:14:12.107159   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.107166   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.107172   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.109326   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:12.109344   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.109353   86706 round_trippers.go:580]     Audit-Id: 3d49a395-ff9c-40d5-8180-2985a7ca8b15
	I1206 19:14:12.109362   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.109369   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.109376   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.109383   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.109392   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.109649   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-593099","namespace":"kube-system","uid":"c32eea84-5395-4ffd-9fe4-51ae29b0861c","resourceVersion":"839","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.125:8443","kubernetes.io/config.hash":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.mirror":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.seen":"2023-12-06T19:03:30.652197401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1206 19:14:12.110058   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:12.110071   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.110078   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.110084   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.112184   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:12.112200   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.112207   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.112212   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.112217   86706 round_trippers.go:580]     Audit-Id: 4e0ed0b7-5837-407a-9f9a-456018030a6f
	I1206 19:14:12.112222   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.112227   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.112232   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.112415   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:12.112747   86706 pod_ready.go:92] pod "kube-apiserver-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:12.112764   86706 pod_ready.go:81] duration metric: took 5.659482ms waiting for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.112772   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.112816   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-593099
	I1206 19:14:12.112824   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.112830   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.112837   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.114675   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:14:12.114689   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.114695   86706 round_trippers.go:580]     Audit-Id: 252785cc-a1ae-4977-bc5b-0b01dd89a0d8
	I1206 19:14:12.114700   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.114705   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.114710   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.114715   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.114720   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.115206   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-593099","namespace":"kube-system","uid":"bd10545f-240d-418a-b4ca-a48c978a56c9","resourceVersion":"826","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.mirror":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.seen":"2023-12-06T19:03:30.652198715Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1206 19:14:12.115569   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:12.115582   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.115589   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.115595   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.117490   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:14:12.117509   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.117517   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.117526   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.117534   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.117542   86706 round_trippers.go:580]     Audit-Id: 7780de8f-d4f0-43dc-b4ae-bd6841a046bf
	I1206 19:14:12.117551   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.117564   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.117944   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:12.118305   86706 pod_ready.go:92] pod "kube-controller-manager-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:12.118324   86706 pod_ready.go:81] duration metric: took 5.546257ms waiting for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.118334   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.118385   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:14:12.118393   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.118401   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.118407   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.128642   86706 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1206 19:14:12.128663   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.128670   86706 round_trippers.go:580]     Audit-Id: 79b373dc-5151-4fce-92b8-d03d9df4c6e2
	I1206 19:14:12.128676   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.128681   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.128686   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.128691   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.128696   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.128860   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggxmb","generateName":"kube-proxy-","namespace":"kube-system","uid":"9967a10f-783d-4e8f-bb49-f609c7227307","resourceVersion":"470","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:14:12.260558   86706 request.go:629] Waited for 131.309952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:14:12.260639   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:14:12.260645   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.260655   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.260667   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.264164   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:12.264186   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.264193   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.264199   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.264204   86706 round_trippers.go:580]     Audit-Id: 48887fb5-de66-4a00-9991-075316d4be1a
	I1206 19:14:12.264209   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.264214   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.264219   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.264384   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03","resourceVersion":"702","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_06_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4234 chars]
	I1206 19:14:12.264718   86706 pod_ready.go:92] pod "kube-proxy-ggxmb" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:12.264736   86706 pod_ready.go:81] duration metric: took 146.394431ms waiting for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.264745   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.461225   86706 request.go:629] Waited for 196.408403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:14:12.461313   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:14:12.461322   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.461330   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.461336   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.463797   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:12.463822   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.463832   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.463840   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.463848   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.463854   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.463867   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.463884   86706 round_trippers.go:580]     Audit-Id: 6e0c1ff7-4c27-4e1c-be0b-6dbffd92d68c
	I1206 19:14:12.464064   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-thqkt","generateName":"kube-proxy-","namespace":"kube-system","uid":"0012fda4-56e7-4054-ab90-1704569e66e8","resourceVersion":"809","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:14:12.660526   86706 request.go:629] Waited for 195.951189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:12.660592   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:12.660600   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.660608   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.660615   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.664032   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:12.664054   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.664061   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.664066   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.664072   86706 round_trippers.go:580]     Audit-Id: d7ca7487-5729-4a52-9d0e-d125cdf623ef
	I1206 19:14:12.664077   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.664082   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.664087   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.664259   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:12.664654   86706 pod_ready.go:92] pod "kube-proxy-thqkt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:12.664674   86706 pod_ready.go:81] duration metric: took 399.92302ms waiting for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.664683   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:12.861112   86706 request.go:629] Waited for 196.367207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:14:12.861271   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:14:12.861285   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:12.861296   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:12.861306   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:12.864407   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:12.864433   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:12.864443   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:12 GMT
	I1206 19:14:12.864452   86706 round_trippers.go:580]     Audit-Id: 0502de40-29f3-413a-83da-6d525b2531f0
	I1206 19:14:12.864460   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:12.864468   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:12.864476   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:12.864484   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:12.864900   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2wm","generateName":"kube-proxy-","namespace":"kube-system","uid":"366b51c9-af8f-4bd5-8200-dc43c4a3017c","resourceVersion":"676","creationTimestamp":"2023-12-06T19:05:15Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:05:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1206 19:14:13.060880   86706 request.go:629] Waited for 195.418853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:14:13.060950   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:14:13.060955   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:13.060963   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:13.060971   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:13.064189   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:13.064218   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:13.064228   86706 round_trippers.go:580]     Audit-Id: 651d2233-c4a6-440e-9a5a-79efc4feafe8
	I1206 19:14:13.064236   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:13.064244   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:13.064253   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:13.064261   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:13.064269   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:13 GMT
	I1206 19:14:13.064390   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m03","uid":"a37befac-9ea6-49a7-a8c3-a9b16981befa","resourceVersion":"696","creationTimestamp":"2023-12-06T19:05:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_06_00_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:05:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1206 19:14:13.064798   86706 pod_ready.go:92] pod "kube-proxy-tp2wm" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:13.064823   86706 pod_ready.go:81] duration metric: took 400.134781ms waiting for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:13.064833   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:13.261311   86706 request.go:629] Waited for 196.364499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:14:13.261372   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:14:13.261377   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:13.261385   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:13.261391   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:13.264245   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:13.264275   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:13.264285   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:13 GMT
	I1206 19:14:13.264292   86706 round_trippers.go:580]     Audit-Id: 783bd7ad-668e-46f5-9a22-68cfc5591a96
	I1206 19:14:13.264299   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:13.264306   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:13.264313   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:13.264320   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:13.264549   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-593099","namespace":"kube-system","uid":"7ae8a659-33ba-4e2b-9211-8d84efe7e5a4","resourceVersion":"831","creationTimestamp":"2023-12-06T19:03:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.mirror":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.seen":"2023-12-06T19:03:21.456083881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1206 19:14:13.461316   86706 request.go:629] Waited for 196.384722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:13.461383   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:14:13.461388   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:13.461396   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:13.461402   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:13.464194   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:14:13.464219   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:13.464227   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:13.464232   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:13.464237   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:13.464243   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:13 GMT
	I1206 19:14:13.464251   86706 round_trippers.go:580]     Audit-Id: dc0c9487-286a-4a51-b2ab-07b36702256f
	I1206 19:14:13.464260   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:13.464739   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1206 19:14:13.465160   86706 pod_ready.go:92] pod "kube-scheduler-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:14:13.465179   86706 pod_ready.go:81] duration metric: took 400.339637ms waiting for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:14:13.465195   86706 pod_ready.go:38] duration metric: took 10.495343251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:14:13.465218   86706 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:14:13.465313   86706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:14:13.479906   86706 command_runner.go:130] > 1067
	I1206 19:14:13.479950   86706 api_server.go:72] duration metric: took 12.3900888s to wait for apiserver process to appear ...
	I1206 19:14:13.479960   86706 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:14:13.479984   86706 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:14:13.485510   86706 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I1206 19:14:13.485581   86706 round_trippers.go:463] GET https://192.168.39.125:8443/version
	I1206 19:14:13.485589   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:13.485596   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:13.485602   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:13.486667   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:14:13.486686   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:13.486695   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:13 GMT
	I1206 19:14:13.486708   86706 round_trippers.go:580]     Audit-Id: fc79f83d-e461-489f-b765-2f7df275836b
	I1206 19:14:13.486715   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:13.486723   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:13.486732   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:13.486741   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:13.486749   86706 round_trippers.go:580]     Content-Length: 264
	I1206 19:14:13.486767   86706 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1206 19:14:13.486814   86706 api_server.go:141] control plane version: v1.28.4
	I1206 19:14:13.486829   86706 api_server.go:131] duration metric: took 6.863727ms to wait for apiserver health ...
	I1206 19:14:13.486836   86706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:14:13.661337   86706 request.go:629] Waited for 174.386956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:14:13.661405   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:14:13.661413   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:13.661424   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:13.661437   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:13.666098   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:14:13.666131   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:13.666140   86706 round_trippers.go:580]     Audit-Id: 2944f396-7af5-412c-b22e-51aa80ffd196
	I1206 19:14:13.666149   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:13.666157   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:13.666165   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:13.666171   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:13.666177   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:13 GMT
	I1206 19:14:13.667411   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"848"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"828","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81870 chars]
	I1206 19:14:13.670021   86706 system_pods.go:59] 12 kube-system pods found
	I1206 19:14:13.670046   86706 system_pods.go:61] "coredns-5dd5756b68-h6rcq" [85247dde-4cee-482e-8f9b-a9e8f4e7172e] Running
	I1206 19:14:13.670051   86706 system_pods.go:61] "etcd-multinode-593099" [17573829-76f1-4718-80d6-248db178e8d0] Running
	I1206 19:14:13.670059   86706 system_pods.go:61] "kindnet-2s5b8" [da77f62f-091e-45f0-b6a6-0bc04b1c1f5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 19:14:13.670067   86706 system_pods.go:61] "kindnet-mbkkj" [e67fa795-ace6-4463-b0be-493b26fec4e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 19:14:13.670074   86706 system_pods.go:61] "kindnet-x2r64" [1dafec99-c18b-40ca-8b9d-b5d520390c8c] Running
	I1206 19:14:13.670082   86706 system_pods.go:61] "kube-apiserver-multinode-593099" [c32eea84-5395-4ffd-9fe4-51ae29b0861c] Running
	I1206 19:14:13.670086   86706 system_pods.go:61] "kube-controller-manager-multinode-593099" [bd10545f-240d-418a-b4ca-a48c978a56c9] Running
	I1206 19:14:13.670091   86706 system_pods.go:61] "kube-proxy-ggxmb" [9967a10f-783d-4e8f-bb49-f609c7227307] Running
	I1206 19:14:13.670095   86706 system_pods.go:61] "kube-proxy-thqkt" [0012fda4-56e7-4054-ab90-1704569e66e8] Running
	I1206 19:14:13.670099   86706 system_pods.go:61] "kube-proxy-tp2wm" [366b51c9-af8f-4bd5-8200-dc43c4a3017c] Running
	I1206 19:14:13.670103   86706 system_pods.go:61] "kube-scheduler-multinode-593099" [7ae8a659-33ba-4e2b-9211-8d84efe7e5a4] Running
	I1206 19:14:13.670107   86706 system_pods.go:61] "storage-provisioner" [35974b37-5aff-4940-8e2d-5fec9d1e2166] Running
	I1206 19:14:13.670114   86706 system_pods.go:74] duration metric: took 183.271873ms to wait for pod list to return data ...
	I1206 19:14:13.670124   86706 default_sa.go:34] waiting for default service account to be created ...
	I1206 19:14:13.860482   86706 request.go:629] Waited for 190.28326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/default/serviceaccounts
	I1206 19:14:13.860578   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/default/serviceaccounts
	I1206 19:14:13.860589   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:13.860603   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:13.860618   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:13.863684   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:13.863708   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:13.863717   86706 round_trippers.go:580]     Audit-Id: 185cd2f7-0dff-4e13-8564-ced5617de2a2
	I1206 19:14:13.863725   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:13.863733   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:13.863741   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:13.863754   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:13.863764   86706 round_trippers.go:580]     Content-Length: 261
	I1206 19:14:13.863780   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:13 GMT
	I1206 19:14:13.863814   86706 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"848"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"96af57ff-2c6a-48e3-9fcf-3f52ff53a1ea","resourceVersion":"298","creationTimestamp":"2023-12-06T19:03:42Z"}}]}
	I1206 19:14:13.864033   86706 default_sa.go:45] found service account: "default"
	I1206 19:14:13.864057   86706 default_sa.go:55] duration metric: took 193.92565ms for default service account to be created ...
	I1206 19:14:13.864068   86706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 19:14:14.060500   86706 request.go:629] Waited for 196.345943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:14:14.060560   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:14:14.060565   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:14.060573   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:14.060591   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:14.064674   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:14:14.064701   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:14.064711   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:14 GMT
	I1206 19:14:14.064720   86706 round_trippers.go:580]     Audit-Id: 632ac1c8-30bc-4ff0-9c43-20813cacc941
	I1206 19:14:14.064727   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:14.064734   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:14.064742   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:14.064749   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:14.066635   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"848"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"828","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81870 chars]
	I1206 19:14:14.069062   86706 system_pods.go:86] 12 kube-system pods found
	I1206 19:14:14.069088   86706 system_pods.go:89] "coredns-5dd5756b68-h6rcq" [85247dde-4cee-482e-8f9b-a9e8f4e7172e] Running
	I1206 19:14:14.069095   86706 system_pods.go:89] "etcd-multinode-593099" [17573829-76f1-4718-80d6-248db178e8d0] Running
	I1206 19:14:14.069106   86706 system_pods.go:89] "kindnet-2s5b8" [da77f62f-091e-45f0-b6a6-0bc04b1c1f5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 19:14:14.069116   86706 system_pods.go:89] "kindnet-mbkkj" [e67fa795-ace6-4463-b0be-493b26fec4e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1206 19:14:14.069125   86706 system_pods.go:89] "kindnet-x2r64" [1dafec99-c18b-40ca-8b9d-b5d520390c8c] Running
	I1206 19:14:14.069137   86706 system_pods.go:89] "kube-apiserver-multinode-593099" [c32eea84-5395-4ffd-9fe4-51ae29b0861c] Running
	I1206 19:14:14.069145   86706 system_pods.go:89] "kube-controller-manager-multinode-593099" [bd10545f-240d-418a-b4ca-a48c978a56c9] Running
	I1206 19:14:14.069153   86706 system_pods.go:89] "kube-proxy-ggxmb" [9967a10f-783d-4e8f-bb49-f609c7227307] Running
	I1206 19:14:14.069161   86706 system_pods.go:89] "kube-proxy-thqkt" [0012fda4-56e7-4054-ab90-1704569e66e8] Running
	I1206 19:14:14.069168   86706 system_pods.go:89] "kube-proxy-tp2wm" [366b51c9-af8f-4bd5-8200-dc43c4a3017c] Running
	I1206 19:14:14.069177   86706 system_pods.go:89] "kube-scheduler-multinode-593099" [7ae8a659-33ba-4e2b-9211-8d84efe7e5a4] Running
	I1206 19:14:14.069188   86706 system_pods.go:89] "storage-provisioner" [35974b37-5aff-4940-8e2d-5fec9d1e2166] Running
	I1206 19:14:14.069197   86706 system_pods.go:126] duration metric: took 205.12223ms to wait for k8s-apps to be running ...
	I1206 19:14:14.069211   86706 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 19:14:14.069273   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:14:14.084200   86706 system_svc.go:56] duration metric: took 14.97907ms WaitForService to wait for kubelet.
	I1206 19:14:14.084232   86706 kubeadm.go:581] duration metric: took 12.994370293s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 19:14:14.084259   86706 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:14:14.260716   86706 request.go:629] Waited for 176.352815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes
	I1206 19:14:14.260784   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes
	I1206 19:14:14.260818   86706 round_trippers.go:469] Request Headers:
	I1206 19:14:14.260830   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:14:14.260844   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:14:14.263894   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:14:14.263918   86706 round_trippers.go:577] Response Headers:
	I1206 19:14:14.263928   86706 round_trippers.go:580]     Audit-Id: d01dd8cc-96ef-4fb0-a91d-027b4b9d3431
	I1206 19:14:14.263935   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:14:14.263942   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:14:14.263949   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:14:14.263957   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:14:14.263967   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:14:14 GMT
	I1206 19:14:14.264287   86706 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"848"},"items":[{"metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"821","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I1206 19:14:14.264932   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:14:14.264954   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:14:14.264965   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:14:14.264969   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:14:14.264976   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:14:14.264979   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:14:14.264985   86706 node_conditions.go:105] duration metric: took 180.721554ms to run NodePressure ...
	I1206 19:14:14.264997   86706 start.go:228] waiting for startup goroutines ...
	I1206 19:14:14.265006   86706 start.go:233] waiting for cluster config update ...
	I1206 19:14:14.265013   86706 start.go:242] writing updated cluster config ...
	I1206 19:14:14.265528   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:14:14.265666   86706 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:14:14.268277   86706 out.go:177] * Starting worker node multinode-593099-m02 in cluster multinode-593099
	I1206 19:14:14.269669   86706 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:14:14.269695   86706 cache.go:56] Caching tarball of preloaded images
	I1206 19:14:14.269796   86706 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:14:14.269807   86706 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:14:14.269893   86706 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:14:14.270054   86706 start.go:365] acquiring machines lock for multinode-593099-m02: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:14:14.270095   86706 start.go:369] acquired machines lock for "multinode-593099-m02" in 22.581µs
	I1206 19:14:14.270111   86706 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:14:14.270118   86706 fix.go:54] fixHost starting: m02
	I1206 19:14:14.270387   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:14:14.270423   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:14:14.284596   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I1206 19:14:14.285059   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:14:14.285526   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:14:14.285549   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:14:14.285920   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:14:14.286128   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:14:14.286274   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetState
	I1206 19:14:14.287937   86706 fix.go:102] recreateIfNeeded on multinode-593099-m02: state=Running err=<nil>
	W1206 19:14:14.287956   86706 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:14:14.289789   86706 out.go:177] * Updating the running kvm2 "multinode-593099-m02" VM ...
	I1206 19:14:14.291227   86706 machine.go:88] provisioning docker machine ...
	I1206 19:14:14.291244   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:14:14.291457   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetMachineName
	I1206 19:14:14.291613   86706 buildroot.go:166] provisioning hostname "multinode-593099-m02"
	I1206 19:14:14.291634   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetMachineName
	I1206 19:14:14.291778   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:14:14.294214   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.294628   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:14:14.294655   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.294761   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:14:14.294935   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:14:14.295098   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:14:14.295238   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:14:14.295398   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:14:14.295843   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:14:14.295863   86706 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-593099-m02 && echo "multinode-593099-m02" | sudo tee /etc/hostname
	I1206 19:14:14.428328   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-593099-m02
	
	I1206 19:14:14.428358   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:14:14.430973   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.431297   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:14:14.431329   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.431493   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:14:14.431693   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:14:14.431844   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:14:14.432010   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:14:14.432188   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:14:14.432499   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:14:14.432516   86706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-593099-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-593099-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-593099-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:14:14.550343   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:14:14.550382   86706 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:14:14.550403   86706 buildroot.go:174] setting up certificates
	I1206 19:14:14.550415   86706 provision.go:83] configureAuth start
	I1206 19:14:14.550431   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetMachineName
	I1206 19:14:14.550757   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:14:14.553468   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.553903   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:14:14.553932   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.554111   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:14:14.556209   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.556545   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:14:14.556582   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.556638   86706 provision.go:138] copyHostCerts
	I1206 19:14:14.556685   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:14:14.556746   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:14:14.556760   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:14:14.556856   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:14:14.556958   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:14:14.556989   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:14:14.556997   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:14:14.557026   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:14:14.557073   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:14:14.557089   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:14:14.557095   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:14:14.557115   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:14:14.557159   86706 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.multinode-593099-m02 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube multinode-593099-m02]
	I1206 19:14:14.776327   86706 provision.go:172] copyRemoteCerts
	I1206 19:14:14.776396   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:14:14.776430   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:14:14.779343   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.779798   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:14:14.779834   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.780032   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:14:14.780241   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:14:14.780393   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:14:14.780524   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:14:14.866330   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 19:14:14.866397   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1206 19:14:14.892196   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 19:14:14.892258   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:14:14.916845   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 19:14:14.916938   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:14:14.940401   86706 provision.go:86] duration metric: configureAuth took 389.969543ms
	I1206 19:14:14.940428   86706 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:14:14.940653   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:14:14.940744   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:14:14.943379   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.943868   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:14:14.943900   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:14:14.944147   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:14:14.944381   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:14:14.944539   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:14:14.944731   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:14:14.944884   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:14:14.945327   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:14:14.945349   86706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:15:45.483316   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:15:45.483355   86706 machine.go:91] provisioned docker machine in 1m31.192109014s
	I1206 19:15:45.483398   86706 start.go:300] post-start starting for "multinode-593099-m02" (driver="kvm2")
	I1206 19:15:45.483419   86706 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:15:45.483453   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:15:45.483779   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:15:45.483809   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:15:45.486870   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.487269   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:15:45.487309   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.487477   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:15:45.487681   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:15:45.487848   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:15:45.487989   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:15:45.576260   86706 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:15:45.580788   86706 command_runner.go:130] > NAME=Buildroot
	I1206 19:15:45.580818   86706 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1206 19:15:45.580825   86706 command_runner.go:130] > ID=buildroot
	I1206 19:15:45.580833   86706 command_runner.go:130] > VERSION_ID=2021.02.12
	I1206 19:15:45.580840   86706 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1206 19:15:45.580909   86706 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:15:45.580927   86706 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:15:45.581004   86706 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:15:45.581105   86706 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:15:45.581118   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /etc/ssl/certs/708342.pem
	I1206 19:15:45.581273   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:15:45.592052   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:15:45.615328   86706 start.go:303] post-start completed in 131.903941ms
	I1206 19:15:45.615357   86706 fix.go:56] fixHost completed within 1m31.345238422s
	I1206 19:15:45.615382   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:15:45.618309   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.618654   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:15:45.618696   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.618849   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:15:45.619061   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:15:45.619245   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:15:45.619379   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:15:45.619550   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:15:45.619923   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1206 19:15:45.619937   86706 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:15:45.742141   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701890145.733854222
	
	I1206 19:15:45.742167   86706 fix.go:206] guest clock: 1701890145.733854222
	I1206 19:15:45.742177   86706 fix.go:219] Guest: 2023-12-06 19:15:45.733854222 +0000 UTC Remote: 2023-12-06 19:15:45.615361437 +0000 UTC m=+454.160713185 (delta=118.492785ms)
	I1206 19:15:45.742197   86706 fix.go:190] guest clock delta is within tolerance: 118.492785ms
	I1206 19:15:45.742202   86706 start.go:83] releasing machines lock for "multinode-593099-m02", held for 1m31.472097837s
	I1206 19:15:45.742223   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:15:45.742510   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:15:45.745060   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.745426   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:15:45.745456   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.747463   86706 out.go:177] * Found network options:
	I1206 19:15:45.749048   86706 out.go:177]   - NO_PROXY=192.168.39.125
	W1206 19:15:45.750406   86706 proxy.go:119] fail to check proxy env: Error ip not in block
	I1206 19:15:45.750452   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:15:45.751034   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:15:45.751222   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:15:45.751342   86706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:15:45.751400   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	W1206 19:15:45.751429   86706 proxy.go:119] fail to check proxy env: Error ip not in block
	I1206 19:15:45.751535   86706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:15:45.751561   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:15:45.754165   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.754429   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.754610   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:15:45.754639   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.754782   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:15:45.754864   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:15:45.754890   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:45.754990   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:15:45.755058   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:15:45.755132   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:15:45.755210   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:15:45.755320   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:15:45.755363   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:15:45.755469   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:15:45.988938   86706 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 19:15:45.988979   86706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 19:15:45.995064   86706 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 19:15:45.995104   86706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:15:45.995157   86706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:15:46.003971   86706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 19:15:46.003995   86706 start.go:475] detecting cgroup driver to use...
	I1206 19:15:46.004062   86706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:15:46.017915   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:15:46.030205   86706 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:15:46.030305   86706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:15:46.043447   86706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:15:46.056581   86706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:15:46.185080   86706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:15:46.304306   86706 docker.go:219] disabling docker service ...
	I1206 19:15:46.304372   86706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:15:46.319380   86706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:15:46.332211   86706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:15:46.453445   86706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:15:46.574189   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
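	The sequence above stops, disables, and masks both the cri-dockerd shim and the Docker daemon so that CRI-O remains the only container runtime on the node. A rough manual equivalent, assuming the same systemd unit names shown in the log (cri-docker.socket, cri-docker.service, docker.socket, docker.service), would be:
	  # run as root on the node; unit names taken from the log above
	  systemctl stop -f cri-docker.socket cri-docker.service
	  systemctl disable cri-docker.socket
	  systemctl mask cri-docker.service
	  systemctl stop -f docker.socket docker.service
	  systemctl disable docker.socket
	  systemctl mask docker.service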
	I1206 19:15:46.586788   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:15:46.604750   86706 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 19:15:46.604800   86706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:15:46.604858   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:15:46.615163   86706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:15:46.615231   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:15:46.631062   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:15:46.643431   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:15:46.657760   86706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:15:46.667284   86706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:15:46.675570   86706 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 19:15:46.675649   86706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:15:46.683952   86706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:15:46.799138   86706 ssh_runner.go:195] Run: sudo systemctl restart crio
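	The sed invocations above point CRI-O at the registry.k8s.io/pause:3.9 infra image and the cgroupfs cgroup manager via the /etc/crio/crio.conf.d/02-crio.conf drop-in, after which systemd is reloaded and crio restarted. A condensed manual sketch of the same edits (values copied from the log; adjust the drop-in path if yours differs):
	  # run as root; edits the same drop-in file the log shows
	  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	  sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  systemctl daemon-reload && systemctl restart crio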
	I1206 19:15:47.040266   86706 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:15:47.040356   86706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:15:47.045367   86706 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 19:15:47.045387   86706 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 19:15:47.045394   86706 command_runner.go:130] > Device: 16h/22d	Inode: 1282        Links: 1
	I1206 19:15:47.045401   86706 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:15:47.045406   86706 command_runner.go:130] > Access: 2023-12-06 19:15:46.958950318 +0000
	I1206 19:15:47.045416   86706 command_runner.go:130] > Modify: 2023-12-06 19:15:46.958950318 +0000
	I1206 19:15:47.045421   86706 command_runner.go:130] > Change: 2023-12-06 19:15:46.958950318 +0000
	I1206 19:15:47.045424   86706 command_runner.go:130] >  Birth: -
	I1206 19:15:47.045676   86706 start.go:543] Will wait 60s for crictl version
	I1206 19:15:47.045723   86706 ssh_runner.go:195] Run: which crictl
	I1206 19:15:47.049688   86706 command_runner.go:130] > /usr/bin/crictl
	I1206 19:15:47.049744   86706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:15:47.096092   86706 command_runner.go:130] > Version:  0.1.0
	I1206 19:15:47.096119   86706 command_runner.go:130] > RuntimeName:  cri-o
	I1206 19:15:47.096126   86706 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1206 19:15:47.096132   86706 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 19:15:47.096149   86706 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:15:47.096207   86706 ssh_runner.go:195] Run: crio --version
	I1206 19:15:47.144529   86706 command_runner.go:130] > crio version 1.24.1
	I1206 19:15:47.144558   86706 command_runner.go:130] > Version:          1.24.1
	I1206 19:15:47.144565   86706 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:15:47.144569   86706 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:15:47.144575   86706 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:15:47.144580   86706 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:15:47.144584   86706 command_runner.go:130] > Compiler:         gc
	I1206 19:15:47.144589   86706 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:15:47.144594   86706 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:15:47.144602   86706 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:15:47.144606   86706 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:15:47.144610   86706 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:15:47.147105   86706 ssh_runner.go:195] Run: crio --version
	I1206 19:15:47.199920   86706 command_runner.go:130] > crio version 1.24.1
	I1206 19:15:47.199942   86706 command_runner.go:130] > Version:          1.24.1
	I1206 19:15:47.199948   86706 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:15:47.199958   86706 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:15:47.199965   86706 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:15:47.199970   86706 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:15:47.199974   86706 command_runner.go:130] > Compiler:         gc
	I1206 19:15:47.199978   86706 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:15:47.199983   86706 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:15:47.199990   86706 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:15:47.199994   86706 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:15:47.199999   86706 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:15:47.202122   86706 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:15:47.203767   86706 out.go:177]   - env NO_PROXY=192.168.39.125
	I1206 19:15:47.205271   86706 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:15:47.207823   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:47.208166   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:15:47.208198   86706 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:15:47.208398   86706 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:15:47.212992   86706 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1206 19:15:47.213057   86706 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099 for IP: 192.168.39.6
	I1206 19:15:47.213079   86706 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:15:47.213207   86706 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:15:47.213272   86706 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:15:47.213296   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 19:15:47.213318   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 19:15:47.213337   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 19:15:47.213356   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 19:15:47.213411   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:15:47.213454   86706 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:15:47.213461   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:15:47.213482   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:15:47.213505   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:15:47.213531   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:15:47.213572   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:15:47.213600   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /usr/share/ca-certificates/708342.pem
	I1206 19:15:47.213613   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:15:47.213625   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem -> /usr/share/ca-certificates/70834.pem
	I1206 19:15:47.213973   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:15:47.239780   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:15:47.267381   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:15:47.296208   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:15:47.323199   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:15:47.346961   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:15:47.371296   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:15:47.396419   86706 ssh_runner.go:195] Run: openssl version
	I1206 19:15:47.402494   86706 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1206 19:15:47.402571   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:15:47.413119   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:15:47.418085   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:15:47.418361   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:15:47.418432   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:15:47.424383   86706 command_runner.go:130] > 3ec20f2e
	I1206 19:15:47.424485   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:15:47.433149   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:15:47.443422   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:15:47.448390   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:15:47.448554   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:15:47.448621   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:15:47.454159   86706 command_runner.go:130] > b5213941
	I1206 19:15:47.454411   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:15:47.462808   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:15:47.472773   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:15:47.477366   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:15:47.477603   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:15:47.477660   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:15:47.483083   86706 command_runner.go:130] > 51391683
	I1206 19:15:47.483310   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
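	Each certificate copied to the node is trusted by hashing its subject with openssl and symlinking it under /etc/ssl/certs/<hash>.0, which is how OpenSSL looks up CA files at verification time. A minimal sketch of that pattern for one certificate (the path is illustrative; the hashes 3ec20f2e, b5213941 and 51391683 above were produced the same way):
	  # compute the subject hash and create the lookup symlink OpenSSL expects
	  cert=/usr/share/ca-certificates/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$cert")
	  ln -fs "$cert" "/etc/ssl/certs/${hash}.0"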
	I1206 19:15:47.491688   86706 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:15:47.496384   86706 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:15:47.496415   86706 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:15:47.496503   86706 ssh_runner.go:195] Run: crio config
	I1206 19:15:47.550499   86706 command_runner.go:130] ! time="2023-12-06 19:15:47.542197619Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1206 19:15:47.550535   86706 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 19:15:47.559750   86706 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 19:15:47.559778   86706 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 19:15:47.559786   86706 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 19:15:47.559791   86706 command_runner.go:130] > #
	I1206 19:15:47.559802   86706 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 19:15:47.559813   86706 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 19:15:47.559823   86706 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 19:15:47.559835   86706 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 19:15:47.559849   86706 command_runner.go:130] > # reload'.
	I1206 19:15:47.559865   86706 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 19:15:47.559879   86706 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 19:15:47.559893   86706 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 19:15:47.559907   86706 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 19:15:47.559916   86706 command_runner.go:130] > [crio]
	I1206 19:15:47.559927   86706 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 19:15:47.559938   86706 command_runner.go:130] > # containers images, in this directory.
	I1206 19:15:47.559950   86706 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1206 19:15:47.559969   86706 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 19:15:47.559981   86706 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1206 19:15:47.559995   86706 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 19:15:47.560009   86706 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 19:15:47.560019   86706 command_runner.go:130] > storage_driver = "overlay"
	I1206 19:15:47.560033   86706 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 19:15:47.560047   86706 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 19:15:47.560058   86706 command_runner.go:130] > storage_option = [
	I1206 19:15:47.560068   86706 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1206 19:15:47.560077   86706 command_runner.go:130] > ]
	I1206 19:15:47.560088   86706 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 19:15:47.560102   86706 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 19:15:47.560113   86706 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 19:15:47.560124   86706 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 19:15:47.560138   86706 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 19:15:47.560149   86706 command_runner.go:130] > # always happen on a node reboot
	I1206 19:15:47.560159   86706 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 19:15:47.560172   86706 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 19:15:47.560186   86706 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 19:15:47.560202   86706 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 19:15:47.560214   86706 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1206 19:15:47.560230   86706 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 19:15:47.560248   86706 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 19:15:47.560259   86706 command_runner.go:130] > # internal_wipe = true
	I1206 19:15:47.560270   86706 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 19:15:47.560284   86706 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 19:15:47.560304   86706 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 19:15:47.560315   86706 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 19:15:47.560327   86706 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 19:15:47.560336   86706 command_runner.go:130] > [crio.api]
	I1206 19:15:47.560348   86706 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 19:15:47.560359   86706 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 19:15:47.560372   86706 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 19:15:47.560380   86706 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 19:15:47.560395   86706 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 19:15:47.560408   86706 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 19:15:47.560418   86706 command_runner.go:130] > # stream_port = "0"
	I1206 19:15:47.560431   86706 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 19:15:47.560442   86706 command_runner.go:130] > # stream_enable_tls = false
	I1206 19:15:47.560456   86706 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 19:15:47.560466   86706 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 19:15:47.560477   86706 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 19:15:47.560492   86706 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1206 19:15:47.560502   86706 command_runner.go:130] > # minutes.
	I1206 19:15:47.560511   86706 command_runner.go:130] > # stream_tls_cert = ""
	I1206 19:15:47.560525   86706 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 19:15:47.560540   86706 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1206 19:15:47.560550   86706 command_runner.go:130] > # stream_tls_key = ""
	I1206 19:15:47.560561   86706 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 19:15:47.560575   86706 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 19:15:47.560587   86706 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1206 19:15:47.560597   86706 command_runner.go:130] > # stream_tls_ca = ""
	I1206 19:15:47.560611   86706 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:15:47.560622   86706 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1206 19:15:47.560636   86706 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:15:47.560646   86706 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1206 19:15:47.560668   86706 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 19:15:47.560681   86706 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 19:15:47.560687   86706 command_runner.go:130] > [crio.runtime]
	I1206 19:15:47.560698   86706 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 19:15:47.560713   86706 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 19:15:47.560724   86706 command_runner.go:130] > # "nofile=1024:2048"
	I1206 19:15:47.560738   86706 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 19:15:47.560747   86706 command_runner.go:130] > # default_ulimits = [
	I1206 19:15:47.560756   86706 command_runner.go:130] > # ]
	I1206 19:15:47.560781   86706 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 19:15:47.560791   86706 command_runner.go:130] > # no_pivot = false
	I1206 19:15:47.560801   86706 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 19:15:47.560816   86706 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 19:15:47.560828   86706 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 19:15:47.560838   86706 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 19:15:47.560851   86706 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 19:15:47.560866   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:15:47.560877   86706 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1206 19:15:47.560888   86706 command_runner.go:130] > # Cgroup setting for conmon
	I1206 19:15:47.560903   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 19:15:47.560914   86706 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 19:15:47.560925   86706 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 19:15:47.560935   86706 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 19:15:47.560951   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:15:47.560961   86706 command_runner.go:130] > conmon_env = [
	I1206 19:15:47.560975   86706 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1206 19:15:47.560984   86706 command_runner.go:130] > ]
	I1206 19:15:47.560994   86706 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 19:15:47.561006   86706 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 19:15:47.561017   86706 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 19:15:47.561028   86706 command_runner.go:130] > # default_env = [
	I1206 19:15:47.561037   86706 command_runner.go:130] > # ]
	I1206 19:15:47.561048   86706 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 19:15:47.561058   86706 command_runner.go:130] > # selinux = false
	I1206 19:15:47.561073   86706 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 19:15:47.561086   86706 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1206 19:15:47.561100   86706 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1206 19:15:47.561111   86706 command_runner.go:130] > # seccomp_profile = ""
	I1206 19:15:47.561125   86706 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1206 19:15:47.561138   86706 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1206 19:15:47.561154   86706 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1206 19:15:47.561165   86706 command_runner.go:130] > # which might increase security.
	I1206 19:15:47.561174   86706 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1206 19:15:47.561188   86706 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 19:15:47.561202   86706 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 19:15:47.561217   86706 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 19:15:47.561247   86706 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 19:15:47.561260   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:15:47.561268   86706 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 19:15:47.561282   86706 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 19:15:47.561298   86706 command_runner.go:130] > # the cgroup blockio controller.
	I1206 19:15:47.561309   86706 command_runner.go:130] > # blockio_config_file = ""
	I1206 19:15:47.561326   86706 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 19:15:47.561337   86706 command_runner.go:130] > # irqbalance daemon.
	I1206 19:15:47.561347   86706 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 19:15:47.561361   86706 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 19:15:47.561370   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:15:47.561381   86706 command_runner.go:130] > # rdt_config_file = ""
	I1206 19:15:47.561394   86706 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 19:15:47.561405   86706 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 19:15:47.561417   86706 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 19:15:47.561428   86706 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 19:15:47.561443   86706 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 19:15:47.561457   86706 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 19:15:47.561467   86706 command_runner.go:130] > # will be added.
	I1206 19:15:47.561477   86706 command_runner.go:130] > # default_capabilities = [
	I1206 19:15:47.561484   86706 command_runner.go:130] > # 	"CHOWN",
	I1206 19:15:47.561495   86706 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 19:15:47.561505   86706 command_runner.go:130] > # 	"FSETID",
	I1206 19:15:47.561512   86706 command_runner.go:130] > # 	"FOWNER",
	I1206 19:15:47.561522   86706 command_runner.go:130] > # 	"SETGID",
	I1206 19:15:47.561533   86706 command_runner.go:130] > # 	"SETUID",
	I1206 19:15:47.561543   86706 command_runner.go:130] > # 	"SETPCAP",
	I1206 19:15:47.561551   86706 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 19:15:47.561561   86706 command_runner.go:130] > # 	"KILL",
	I1206 19:15:47.561569   86706 command_runner.go:130] > # ]
	I1206 19:15:47.561591   86706 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 19:15:47.561605   86706 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:15:47.561616   86706 command_runner.go:130] > # default_sysctls = [
	I1206 19:15:47.561623   86706 command_runner.go:130] > # ]
	I1206 19:15:47.561634   86706 command_runner.go:130] > # List of devices on the host that a
	I1206 19:15:47.561646   86706 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 19:15:47.561657   86706 command_runner.go:130] > # allowed_devices = [
	I1206 19:15:47.561667   86706 command_runner.go:130] > # 	"/dev/fuse",
	I1206 19:15:47.561675   86706 command_runner.go:130] > # ]
	I1206 19:15:47.561685   86706 command_runner.go:130] > # List of additional devices. specified as
	I1206 19:15:47.561701   86706 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 19:15:47.561714   86706 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 19:15:47.561752   86706 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:15:47.561764   86706 command_runner.go:130] > # additional_devices = [
	I1206 19:15:47.561770   86706 command_runner.go:130] > # ]
	I1206 19:15:47.561779   86706 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 19:15:47.561789   86706 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 19:15:47.561797   86706 command_runner.go:130] > # 	"/etc/cdi",
	I1206 19:15:47.561807   86706 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 19:15:47.561813   86706 command_runner.go:130] > # ]
	I1206 19:15:47.561828   86706 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 19:15:47.561842   86706 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 19:15:47.561853   86706 command_runner.go:130] > # Defaults to false.
	I1206 19:15:47.561865   86706 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 19:15:47.561879   86706 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 19:15:47.561893   86706 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 19:15:47.561903   86706 command_runner.go:130] > # hooks_dir = [
	I1206 19:15:47.561914   86706 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 19:15:47.561921   86706 command_runner.go:130] > # ]
	I1206 19:15:47.561935   86706 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 19:15:47.561950   86706 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 19:15:47.561962   86706 command_runner.go:130] > # its default mounts from the following two files:
	I1206 19:15:47.561970   86706 command_runner.go:130] > #
	I1206 19:15:47.561981   86706 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 19:15:47.561996   86706 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 19:15:47.562009   86706 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 19:15:47.562020   86706 command_runner.go:130] > #
	I1206 19:15:47.562031   86706 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 19:15:47.562046   86706 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 19:15:47.562060   86706 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 19:15:47.562072   86706 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 19:15:47.562081   86706 command_runner.go:130] > #
	I1206 19:15:47.562089   86706 command_runner.go:130] > # default_mounts_file = ""
	I1206 19:15:47.562102   86706 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 19:15:47.562116   86706 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 19:15:47.562127   86706 command_runner.go:130] > pids_limit = 1024
	I1206 19:15:47.562139   86706 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 19:15:47.562152   86706 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 19:15:47.562165   86706 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 19:15:47.562182   86706 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 19:15:47.562193   86706 command_runner.go:130] > # log_size_max = -1
	I1206 19:15:47.562208   86706 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 19:15:47.562218   86706 command_runner.go:130] > # log_to_journald = false
	I1206 19:15:47.562232   86706 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 19:15:47.562244   86706 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 19:15:47.562253   86706 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 19:15:47.562265   86706 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 19:15:47.562278   86706 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 19:15:47.562294   86706 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 19:15:47.562307   86706 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 19:15:47.562317   86706 command_runner.go:130] > # read_only = false
	I1206 19:15:47.562329   86706 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 19:15:47.562343   86706 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 19:15:47.562354   86706 command_runner.go:130] > # live configuration reload.
	I1206 19:15:47.562362   86706 command_runner.go:130] > # log_level = "info"
	I1206 19:15:47.562375   86706 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 19:15:47.562388   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:15:47.562398   86706 command_runner.go:130] > # log_filter = ""
	I1206 19:15:47.562411   86706 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 19:15:47.562422   86706 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 19:15:47.562433   86706 command_runner.go:130] > # separated by comma.
	I1206 19:15:47.562443   86706 command_runner.go:130] > # uid_mappings = ""
	I1206 19:15:47.562457   86706 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 19:15:47.562472   86706 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 19:15:47.562484   86706 command_runner.go:130] > # separated by comma.
	I1206 19:15:47.562494   86706 command_runner.go:130] > # gid_mappings = ""
	I1206 19:15:47.562505   86706 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 19:15:47.562520   86706 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:15:47.562533   86706 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:15:47.562544   86706 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 19:15:47.562556   86706 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 19:15:47.562570   86706 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:15:47.562584   86706 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:15:47.562596   86706 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 19:15:47.562607   86706 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 19:15:47.562621   86706 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 19:15:47.562635   86706 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 19:15:47.562646   86706 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 19:15:47.562659   86706 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 19:15:47.562672   86706 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 19:15:47.562681   86706 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 19:15:47.562694   86706 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 19:15:47.562707   86706 command_runner.go:130] > drop_infra_ctr = false
	I1206 19:15:47.562721   86706 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 19:15:47.562735   86706 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 19:15:47.562750   86706 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 19:15:47.562760   86706 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 19:15:47.562771   86706 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 19:15:47.562783   86706 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 19:15:47.562793   86706 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 19:15:47.562806   86706 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 19:15:47.562817   86706 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1206 19:15:47.562832   86706 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 19:15:47.562846   86706 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1206 19:15:47.562860   86706 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1206 19:15:47.562870   86706 command_runner.go:130] > # default_runtime = "runc"
	I1206 19:15:47.562883   86706 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 19:15:47.562900   86706 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1206 19:15:47.562919   86706 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 19:15:47.562931   86706 command_runner.go:130] > # creation as a file is not desired either.
	I1206 19:15:47.562946   86706 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 19:15:47.562959   86706 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 19:15:47.562970   86706 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 19:15:47.562979   86706 command_runner.go:130] > # ]
	I1206 19:15:47.562991   86706 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 19:15:47.563005   86706 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 19:15:47.563019   86706 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1206 19:15:47.563033   86706 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1206 19:15:47.563042   86706 command_runner.go:130] > #
	I1206 19:15:47.563051   86706 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1206 19:15:47.563063   86706 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1206 19:15:47.563074   86706 command_runner.go:130] > #  runtime_type = "oci"
	I1206 19:15:47.563084   86706 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1206 19:15:47.563095   86706 command_runner.go:130] > #  privileged_without_host_devices = false
	I1206 19:15:47.563105   86706 command_runner.go:130] > #  allowed_annotations = []
	I1206 19:15:47.563112   86706 command_runner.go:130] > # Where:
	I1206 19:15:47.563125   86706 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1206 19:15:47.563139   86706 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1206 19:15:47.563153   86706 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 19:15:47.563167   86706 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 19:15:47.563178   86706 command_runner.go:130] > #   in $PATH.
	I1206 19:15:47.563191   86706 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1206 19:15:47.563200   86706 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 19:15:47.563214   86706 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1206 19:15:47.563224   86706 command_runner.go:130] > #   state.
	I1206 19:15:47.563238   86706 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 19:15:47.563252   86706 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1206 19:15:47.563266   86706 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 19:15:47.563278   86706 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 19:15:47.563293   86706 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 19:15:47.563308   86706 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 19:15:47.563320   86706 command_runner.go:130] > #   The currently recognized values are:
	I1206 19:15:47.563335   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 19:15:47.563351   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 19:15:47.563368   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 19:15:47.563382   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 19:15:47.563398   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 19:15:47.563412   86706 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 19:15:47.563426   86706 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 19:15:47.563441   86706 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1206 19:15:47.563452   86706 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 19:15:47.563461   86706 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 19:15:47.563472   86706 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1206 19:15:47.563480   86706 command_runner.go:130] > runtime_type = "oci"
	I1206 19:15:47.563491   86706 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 19:15:47.563502   86706 command_runner.go:130] > runtime_config_path = ""
	I1206 19:15:47.563510   86706 command_runner.go:130] > monitor_path = ""
	I1206 19:15:47.563521   86706 command_runner.go:130] > monitor_cgroup = ""
	I1206 19:15:47.563535   86706 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 19:15:47.563550   86706 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1206 19:15:47.563560   86706 command_runner.go:130] > # running containers
	I1206 19:15:47.563571   86706 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1206 19:15:47.563604   86706 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1206 19:15:47.563666   86706 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1206 19:15:47.563680   86706 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1206 19:15:47.563690   86706 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1206 19:15:47.563699   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1206 19:15:47.563711   86706 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1206 19:15:47.563723   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1206 19:15:47.563735   86706 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1206 19:15:47.563749   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1206 19:15:47.563763   86706 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 19:15:47.563775   86706 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 19:15:47.563789   86706 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 19:15:47.563805   86706 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 19:15:47.563821   86706 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1206 19:15:47.563835   86706 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 19:15:47.563854   86706 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 19:15:47.563870   86706 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 19:15:47.563884   86706 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 19:15:47.563901   86706 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 19:15:47.563911   86706 command_runner.go:130] > # Example:
	I1206 19:15:47.563921   86706 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 19:15:47.563933   86706 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 19:15:47.563945   86706 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 19:15:47.563957   86706 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 19:15:47.563965   86706 command_runner.go:130] > # cpuset = 0
	I1206 19:15:47.563976   86706 command_runner.go:130] > # cpushares = "0-1"
	I1206 19:15:47.563983   86706 command_runner.go:130] > # Where:
	I1206 19:15:47.563993   86706 command_runner.go:130] > # The workload name is workload-type.
	I1206 19:15:47.564008   86706 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 19:15:47.564021   86706 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 19:15:47.564034   86706 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 19:15:47.564050   86706 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 19:15:47.564064   86706 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 19:15:47.564073   86706 command_runner.go:130] > # 
	I1206 19:15:47.564085   86706 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 19:15:47.564094   86706 command_runner.go:130] > #
	I1206 19:15:47.564109   86706 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 19:15:47.564125   86706 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1206 19:15:47.564138   86706 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1206 19:15:47.564151   86706 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1206 19:15:47.564165   86706 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1206 19:15:47.564174   86706 command_runner.go:130] > [crio.image]
	I1206 19:15:47.564185   86706 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 19:15:47.564195   86706 command_runner.go:130] > # default_transport = "docker://"
	I1206 19:15:47.564207   86706 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 19:15:47.564220   86706 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:15:47.564232   86706 command_runner.go:130] > # global_auth_file = ""
	I1206 19:15:47.564245   86706 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 19:15:47.564258   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:15:47.564270   86706 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1206 19:15:47.564285   86706 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 19:15:47.564302   86706 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:15:47.564314   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:15:47.564325   86706 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 19:15:47.564341   86706 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 19:15:47.564355   86706 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 19:15:47.564369   86706 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 19:15:47.564383   86706 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 19:15:47.564392   86706 command_runner.go:130] > # pause_command = "/pause"
	I1206 19:15:47.564406   86706 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 19:15:47.564420   86706 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 19:15:47.564434   86706 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 19:15:47.564448   86706 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 19:15:47.564461   86706 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 19:15:47.564471   86706 command_runner.go:130] > # signature_policy = ""
	I1206 19:15:47.564482   86706 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 19:15:47.564496   86706 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 19:15:47.564507   86706 command_runner.go:130] > # changing them here.
	I1206 19:15:47.564517   86706 command_runner.go:130] > # insecure_registries = [
	I1206 19:15:47.564526   86706 command_runner.go:130] > # ]
	I1206 19:15:47.564546   86706 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 19:15:47.564558   86706 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 19:15:47.564566   86706 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 19:15:47.564579   86706 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 19:15:47.564590   86706 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 19:15:47.564605   86706 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 19:15:47.564615   86706 command_runner.go:130] > # CNI plugins.
	I1206 19:15:47.564624   86706 command_runner.go:130] > [crio.network]
	I1206 19:15:47.564636   86706 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 19:15:47.564649   86706 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 19:15:47.564660   86706 command_runner.go:130] > # cni_default_network = ""
	I1206 19:15:47.564673   86706 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 19:15:47.564683   86706 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 19:15:47.564696   86706 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 19:15:47.564707   86706 command_runner.go:130] > # plugin_dirs = [
	I1206 19:15:47.564717   86706 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 19:15:47.564725   86706 command_runner.go:130] > # ]
	I1206 19:15:47.564739   86706 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 19:15:47.564749   86706 command_runner.go:130] > [crio.metrics]
	I1206 19:15:47.564759   86706 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 19:15:47.564771   86706 command_runner.go:130] > enable_metrics = true
	I1206 19:15:47.564783   86706 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 19:15:47.564794   86706 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 19:15:47.564806   86706 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 19:15:47.564820   86706 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 19:15:47.564834   86706 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 19:15:47.564844   86706 command_runner.go:130] > # metrics_collectors = [
	I1206 19:15:47.564851   86706 command_runner.go:130] > # 	"operations",
	I1206 19:15:47.564863   86706 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1206 19:15:47.564875   86706 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1206 19:15:47.564885   86706 command_runner.go:130] > # 	"operations_errors",
	I1206 19:15:47.564897   86706 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1206 19:15:47.564905   86706 command_runner.go:130] > # 	"image_pulls_by_name",
	I1206 19:15:47.564916   86706 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1206 19:15:47.564927   86706 command_runner.go:130] > # 	"image_pulls_failures",
	I1206 19:15:47.564935   86706 command_runner.go:130] > # 	"image_pulls_successes",
	I1206 19:15:47.564945   86706 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 19:15:47.564955   86706 command_runner.go:130] > # 	"image_layer_reuse",
	I1206 19:15:47.564966   86706 command_runner.go:130] > # 	"containers_oom_total",
	I1206 19:15:47.564977   86706 command_runner.go:130] > # 	"containers_oom",
	I1206 19:15:47.564988   86706 command_runner.go:130] > # 	"processes_defunct",
	I1206 19:15:47.564998   86706 command_runner.go:130] > # 	"operations_total",
	I1206 19:15:47.565008   86706 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 19:15:47.565017   86706 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 19:15:47.565028   86706 command_runner.go:130] > # 	"operations_errors_total",
	I1206 19:15:47.565037   86706 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 19:15:47.565048   86706 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 19:15:47.565060   86706 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 19:15:47.565071   86706 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 19:15:47.565079   86706 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 19:15:47.565090   86706 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 19:15:47.565097   86706 command_runner.go:130] > # ]
	I1206 19:15:47.565110   86706 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 19:15:47.565120   86706 command_runner.go:130] > # metrics_port = 9090
	I1206 19:15:47.565133   86706 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 19:15:47.565143   86706 command_runner.go:130] > # metrics_socket = ""
	I1206 19:15:47.565158   86706 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 19:15:47.565172   86706 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 19:15:47.565185   86706 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 19:15:47.565197   86706 command_runner.go:130] > # certificate on any modification event.
	I1206 19:15:47.565209   86706 command_runner.go:130] > # metrics_cert = ""
	I1206 19:15:47.565222   86706 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 19:15:47.565247   86706 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 19:15:47.565255   86706 command_runner.go:130] > # metrics_key = ""
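With enable_metrics = true and the default metrics_port of 9090 shown above, CRI-O exposes the listed collectors over HTTP on the node. A minimal sketch of scraping that endpoint, assuming the conventional Prometheus /metrics path and a plain-HTTP listener on localhost (neither is confirmed by this log):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Assumes CRI-O's metrics server listens on the default port from the
		// config above; the /metrics path is the usual Prometheus convention
		// and is an assumption here, not taken from this log.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
	}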
	I1206 19:15:47.565269   86706 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 19:15:47.565279   86706 command_runner.go:130] > [crio.tracing]
	I1206 19:15:47.565296   86706 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 19:15:47.565307   86706 command_runner.go:130] > # enable_tracing = false
	I1206 19:15:47.565317   86706 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1206 19:15:47.565329   86706 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1206 19:15:47.565341   86706 command_runner.go:130] > # Number of samples to collect per million spans.
	I1206 19:15:47.565350   86706 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 19:15:47.565364   86706 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 19:15:47.565373   86706 command_runner.go:130] > [crio.stats]
	I1206 19:15:47.565390   86706 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 19:15:47.565404   86706 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 19:15:47.565415   86706 command_runner.go:130] > # stats_collection_period = 0
	I1206 19:15:47.565506   86706 cni.go:84] Creating CNI manager for ""
	I1206 19:15:47.565518   86706 cni.go:136] 3 nodes found, recommending kindnet
	I1206 19:15:47.565529   86706 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:15:47.565556   86706 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-593099 NodeName:multinode-593099-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:15:47.565714   86706 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-593099-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:15:47.565793   86706 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-593099-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:15:47.565864   86706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:15:47.576393   86706 command_runner.go:130] > kubeadm
	I1206 19:15:47.576422   86706 command_runner.go:130] > kubectl
	I1206 19:15:47.576428   86706 command_runner.go:130] > kubelet
	I1206 19:15:47.576456   86706 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:15:47.576521   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1206 19:15:47.586052   86706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 19:15:47.602756   86706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:15:47.621702   86706 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I1206 19:15:47.625677   86706 command_runner.go:130] > 192.168.39.125	control-plane.minikube.internal
	I1206 19:15:47.625800   86706 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:15:47.626105   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:15:47.626162   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:15:47.626210   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:15:47.645632   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I1206 19:15:47.646131   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:15:47.646727   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:15:47.646759   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:15:47.647169   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:15:47.647414   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:15:47.647593   86706 start.go:304] JoinCluster: &{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:15:47.647771   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1206 19:15:47.647791   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:15:47.651062   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:15:47.651591   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:15:47.651620   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:15:47.651806   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:15:47.651990   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:15:47.652200   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:15:47.652330   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:15:47.857485   86706 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token alp6ok.pmimmssqi6znfka2 --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 19:15:47.857557   86706 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 19:15:47.857608   86706 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:15:47.858086   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:15:47.858150   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:15:47.872569   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I1206 19:15:47.873030   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:15:47.873537   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:15:47.873561   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:15:47.873892   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:15:47.874090   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:15:47.874358   86706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-593099-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1206 19:15:47.874388   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:15:47.877417   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:15:47.877929   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:15:47.877962   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:15:47.878069   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:15:47.878274   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:15:47.878427   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:15:47.878574   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:15:48.086631   86706 command_runner.go:130] > node/multinode-593099-m02 cordoned
	I1206 19:15:51.133782   86706 command_runner.go:130] > pod "busybox-5bc68d56bd-shdgj" has DeletionTimestamp older than 1 seconds, skipping
	I1206 19:15:51.133817   86706 command_runner.go:130] > node/multinode-593099-m02 drained
	I1206 19:15:51.135552   86706 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1206 19:15:51.135574   86706 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-2s5b8, kube-system/kube-proxy-ggxmb
	I1206 19:15:51.135608   86706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-593099-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.261217104s)
	I1206 19:15:51.135642   86706 node.go:108] successfully drained node "m02"
	I1206 19:15:51.136022   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:15:51.136231   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:15:51.136655   86706 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1206 19:15:51.136708   86706 round_trippers.go:463] DELETE https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:15:51.136716   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:51.136724   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:51.136730   86706 round_trippers.go:473]     Content-Type: application/json
	I1206 19:15:51.136738   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:51.153539   86706 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1206 19:15:51.153569   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:51.153578   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:51.153584   86706 round_trippers.go:580]     Content-Length: 171
	I1206 19:15:51.153589   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:51 GMT
	I1206 19:15:51.153595   86706 round_trippers.go:580]     Audit-Id: de15ab7d-9423-40cd-b395-1521a2ee506e
	I1206 19:15:51.153601   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:51.153606   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:51.153611   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:51.153641   86706 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-593099-m02","kind":"nodes","uid":"4f57a17b-3ee2-40b9-bc65-252760c4ac03"}}
	I1206 19:15:51.153678   86706 node.go:124] successfully deleted node "m02"
	I1206 19:15:51.153690   86706 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
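The DELETE issued above against /api/v1/nodes/multinode-593099-m02 removes the stale worker object before it is rejoined. A minimal client-go sketch of the same API call; the kubeconfig path is a hypothetical placeholder (the tooling above uses /var/lib/minikube/kubeconfig on the control-plane host):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of: DELETE /api/v1/nodes/multinode-593099-m02
		if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-593099-m02", metav1.DeleteOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("node deleted")
	}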
	I1206 19:15:51.153712   86706 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 19:15:51.153735   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token alp6ok.pmimmssqi6znfka2 --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-593099-m02"
	I1206 19:15:51.223353   86706 command_runner.go:130] > [preflight] Running pre-flight checks
	I1206 19:15:51.381613   86706 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1206 19:15:51.381641   86706 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1206 19:15:51.437271   86706 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 19:15:51.437305   86706 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 19:15:51.437314   86706 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1206 19:15:51.575529   86706 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1206 19:15:52.098197   86706 command_runner.go:130] > This node has joined the cluster:
	I1206 19:15:52.098230   86706 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1206 19:15:52.098239   86706 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1206 19:15:52.098248   86706 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1206 19:15:52.100683   86706 command_runner.go:130] ! W1206 19:15:51.214794    2681 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1206 19:15:52.100714   86706 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1206 19:15:52.100724   86706 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1206 19:15:52.100768   86706 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1206 19:15:52.100797   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1206 19:15:52.383141   86706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=multinode-593099 minikube.k8s.io/updated_at=2023_12_06T19_15_52_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:15:52.497186   86706 command_runner.go:130] > node/multinode-593099-m02 labeled
	I1206 19:15:52.506682   86706 command_runner.go:130] > node/multinode-593099-m03 labeled
	I1206 19:15:52.508464   86706 start.go:306] JoinCluster complete in 4.860865641s
	I1206 19:15:52.508494   86706 cni.go:84] Creating CNI manager for ""
	I1206 19:15:52.508502   86706 cni.go:136] 3 nodes found, recommending kindnet
	I1206 19:15:52.508583   86706 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 19:15:52.516672   86706 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1206 19:15:52.516710   86706 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1206 19:15:52.516721   86706 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1206 19:15:52.516732   86706 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:15:52.516741   86706 command_runner.go:130] > Access: 2023-12-06 19:13:22.670512873 +0000
	I1206 19:15:52.516749   86706 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1206 19:15:52.516762   86706 command_runner.go:130] > Change: 2023-12-06 19:13:20.668512873 +0000
	I1206 19:15:52.516768   86706 command_runner.go:130] >  Birth: -
	I1206 19:15:52.517019   86706 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 19:15:52.517035   86706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 19:15:52.537738   86706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 19:15:52.910318   86706 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:15:52.914772   86706 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:15:52.918692   86706 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1206 19:15:52.931499   86706 command_runner.go:130] > daemonset.apps/kindnet configured
	I1206 19:15:52.934618   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:15:52.934847   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:15:52.935205   86706 round_trippers.go:463] GET https://192.168.39.125:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 19:15:52.935219   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.935227   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.935233   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.937954   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:52.937982   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.937994   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.938003   86706 round_trippers.go:580]     Audit-Id: 3f125832-0e05-4be5-95d1-449c11e28cd7
	I1206 19:15:52.938011   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.938020   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.938027   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.938035   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.938044   86706 round_trippers.go:580]     Content-Length: 291
	I1206 19:15:52.938076   86706 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"841","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1206 19:15:52.938165   86706 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-593099" context rescaled to 1 replicas
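The GET against .../deployments/coredns/scale above reads the Scale subresource that is then used to pin coredns at one replica on this multi-node cluster. A minimal client-go sketch of reading and updating that subresource, again with a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deployments := cs.AppsV1().Deployments("kube-system")
		scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("current replicas:", scale.Spec.Replicas)

		// Pin the deployment to a single replica, as the log above does.
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}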
	I1206 19:15:52.938194   86706 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 19:15:52.940044   86706 out.go:177] * Verifying Kubernetes components...
	I1206 19:15:52.941618   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:15:52.956597   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:15:52.956881   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:15:52.957162   86706 node_ready.go:35] waiting up to 6m0s for node "multinode-593099-m02" to be "Ready" ...
	I1206 19:15:52.957267   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:15:52.957279   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.957293   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.957303   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.960098   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:52.960117   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.960124   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.960130   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.960135   86706 round_trippers.go:580]     Audit-Id: c555d785-e6e0-46fd-97b0-944097ed8e95
	I1206 19:15:52.960140   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.960145   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.960150   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.960568   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"6ea06f34-1ede-44f1-9662-8cba0265fa0f","resourceVersion":"992","creationTimestamp":"2023-12-06T19:15:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_15_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:15:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3991 chars]
	I1206 19:15:52.960861   86706 node_ready.go:49] node "multinode-593099-m02" has status "Ready":"True"
	I1206 19:15:52.960875   86706 node_ready.go:38] duration metric: took 3.696167ms waiting for node "multinode-593099-m02" to be "Ready" ...
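node_ready.go above polls the node object until its Ready condition reports True, with a 6m cap. A minimal sketch of that kind of wait with client-go, assuming a hypothetical kubeconfig path and using wait.PollImmediate for the loop:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 3s, give up after 6m, mirroring the timeout used above.
		err = wait.PollImmediate(3*time.Second, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-593099-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}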
	I1206 19:15:52.960883   86706 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:15:52.960945   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:15:52.960953   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.960960   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.960968   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.964653   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:15:52.964672   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.964678   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.964683   86706 round_trippers.go:580]     Audit-Id: 3af45bb3-c345-4f2e-a360-4401fd53be24
	I1206 19:15:52.964688   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.964693   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.964698   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.964703   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.966333   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"999"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"828","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82227 chars]
	I1206 19:15:52.968729   86706 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:52.968820   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:15:52.968830   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.968837   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.968844   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.971331   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:52.971350   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.971359   86706 round_trippers.go:580]     Audit-Id: 11164dfd-1c03-4983-b84d-02f14047942c
	I1206 19:15:52.971366   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.971373   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.971380   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.971392   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.971399   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.971559   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"828","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1206 19:15:52.972037   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:52.972057   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.972065   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.972071   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.974393   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:52.974417   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.974428   86706 round_trippers.go:580]     Audit-Id: 4101cc65-3c18-4887-a8f0-bf66ece61aa0
	I1206 19:15:52.974436   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.974448   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.974458   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.974469   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.974480   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.974608   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:15:52.974917   86706 pod_ready.go:92] pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:52.974931   86706 pod_ready.go:81] duration metric: took 6.181548ms waiting for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
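pod_ready.go performs the analogous check on each system-critical pod by reading the Ready condition from the pod's status. A minimal sketch of that check for the coredns pod named above (kubeconfig path hypothetical):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-h6rcq", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Println("coredns Ready condition:", cond.Status)
			}
		}
	}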
	I1206 19:15:52.974940   86706 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:52.974992   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:15:52.974998   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.975005   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.975013   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.977930   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:52.977954   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.977964   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.977973   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.977980   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.977988   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.977996   86706 round_trippers.go:580]     Audit-Id: e2f88c71-1c59-4501-991e-75babd5d7b77
	I1206 19:15:52.978005   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.978253   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"848","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1206 19:15:52.978748   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:52.978780   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.978796   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.978805   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.981478   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:52.981498   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.981507   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.981516   86706 round_trippers.go:580]     Audit-Id: 5f2dfc4e-9f34-4bae-afed-5fa148e4ff3b
	I1206 19:15:52.981529   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.981540   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.981547   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.981557   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.981980   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:15:52.982388   86706 pod_ready.go:92] pod "etcd-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:52.982405   86706 pod_ready.go:81] duration metric: took 7.458009ms waiting for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:52.982422   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:52.982477   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-593099
	I1206 19:15:52.982484   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.982491   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.982499   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.986670   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:15:52.986693   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.986703   86706 round_trippers.go:580]     Audit-Id: 76e9d3a0-1d02-443c-aee2-d837697f8fd0
	I1206 19:15:52.986712   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.986720   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.986730   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.986742   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.986751   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.987169   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-593099","namespace":"kube-system","uid":"c32eea84-5395-4ffd-9fe4-51ae29b0861c","resourceVersion":"839","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.125:8443","kubernetes.io/config.hash":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.mirror":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.seen":"2023-12-06T19:03:30.652197401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1206 19:15:52.987634   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:52.987649   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.987656   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:52.987662   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.989671   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:15:52.989692   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:52.989702   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:52.989711   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:52.989719   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:52.989728   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:52.989736   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:52.989748   86706 round_trippers.go:580]     Audit-Id: 8f33f333-317b-434f-9623-332fc5a6c8f9
	I1206 19:15:52.990050   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:15:52.990482   86706 pod_ready.go:92] pod "kube-apiserver-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:52.990505   86706 pod_ready.go:81] duration metric: took 8.076837ms waiting for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:52.990521   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:52.990593   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-593099
	I1206 19:15:52.990604   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:52.990616   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:52.990628   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:53.000971   86706 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1206 19:15:53.001003   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:53.001013   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:53.001022   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:52 GMT
	I1206 19:15:53.001030   86706 round_trippers.go:580]     Audit-Id: ccde3e96-b05f-4aef-813f-19ebd2526f30
	I1206 19:15:53.001038   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:53.001046   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:53.001057   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:53.001267   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-593099","namespace":"kube-system","uid":"bd10545f-240d-418a-b4ca-a48c978a56c9","resourceVersion":"826","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.mirror":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.seen":"2023-12-06T19:03:30.652198715Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1206 19:15:53.001705   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:53.001720   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:53.001732   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:53.001740   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:53.004220   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:53.004240   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:53.004247   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:53 GMT
	I1206 19:15:53.004254   86706 round_trippers.go:580]     Audit-Id: eb66665e-e8e3-407d-bbe7-b22c51f859a0
	I1206 19:15:53.004263   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:53.004271   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:53.004280   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:53.004288   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:53.004454   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:15:53.004767   86706 pod_ready.go:92] pod "kube-controller-manager-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:53.004781   86706 pod_ready.go:81] duration metric: took 14.254099ms waiting for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:53.004790   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:53.158224   86706 request.go:629] Waited for 153.360162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:15:53.158289   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:15:53.158294   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:53.158302   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:53.158308   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:53.161452   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:15:53.161472   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:53.161478   86706 round_trippers.go:580]     Audit-Id: b4ab20c5-755d-4278-ba2a-cebb5fc05ae8
	I1206 19:15:53.161484   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:53.161489   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:53.161494   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:53.161499   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:53.161505   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:53 GMT
	I1206 19:15:53.161659   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggxmb","generateName":"kube-proxy-","namespace":"kube-system","uid":"9967a10f-783d-4e8f-bb49-f609c7227307","resourceVersion":"997","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5878 chars]
	I1206 19:15:53.357365   86706 request.go:629] Waited for 195.186199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:15:53.357434   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:15:53.357441   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:53.357454   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:53.357466   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:53.360818   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:15:53.360847   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:53.360857   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:53.360864   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:53.360869   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:53 GMT
	I1206 19:15:53.360874   86706 round_trippers.go:580]     Audit-Id: b12a6477-2271-4b1b-8572-bb19cd086f59
	I1206 19:15:53.360882   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:53.360889   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:53.361674   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"6ea06f34-1ede-44f1-9662-8cba0265fa0f","resourceVersion":"992","creationTimestamp":"2023-12-06T19:15:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_15_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:15:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3991 chars]
	I1206 19:15:53.557348   86706 request.go:629] Waited for 195.212976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:15:53.557424   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:15:53.557430   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:53.557438   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:53.557448   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:53.560267   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:53.560296   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:53.560308   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:53.560317   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:53.560326   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:53.560335   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:53.560343   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:53 GMT
	I1206 19:15:53.560354   86706 round_trippers.go:580]     Audit-Id: f00759e1-f0c8-437d-aa39-c8c36a14fb39
	I1206 19:15:53.560553   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggxmb","generateName":"kube-proxy-","namespace":"kube-system","uid":"9967a10f-783d-4e8f-bb49-f609c7227307","resourceVersion":"997","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5878 chars]
	I1206 19:15:53.757402   86706 request.go:629] Waited for 196.315283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:15:53.757493   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:15:53.757504   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:53.757518   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:53.757546   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:53.760743   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:15:53.760774   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:53.760784   86706 round_trippers.go:580]     Audit-Id: 6b7951d1-a9f1-4a47-b910-5ba0181cb07b
	I1206 19:15:53.760798   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:53.760807   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:53.760815   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:53.760822   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:53.760829   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:53 GMT
	I1206 19:15:53.761025   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"6ea06f34-1ede-44f1-9662-8cba0265fa0f","resourceVersion":"992","creationTimestamp":"2023-12-06T19:15:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_15_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:15:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3991 chars]
	I1206 19:15:54.262255   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:15:54.262283   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:54.262302   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:54.262310   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:54.265085   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:54.265116   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:54.265127   86706 round_trippers.go:580]     Audit-Id: 57d77887-9064-40b5-9c52-45e7e189d148
	I1206 19:15:54.265136   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:54.265144   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:54.265152   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:54.265160   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:54.265173   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:54 GMT
	I1206 19:15:54.265359   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggxmb","generateName":"kube-proxy-","namespace":"kube-system","uid":"9967a10f-783d-4e8f-bb49-f609c7227307","resourceVersion":"1012","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5723 chars]
	I1206 19:15:54.265918   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:15:54.265940   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:54.265951   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:54.265963   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:54.268244   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:54.268269   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:54.268280   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:54.268287   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:54.268299   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:54.268310   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:54.268321   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:54 GMT
	I1206 19:15:54.268332   86706 round_trippers.go:580]     Audit-Id: 62ef20ad-dfaf-417b-8453-27af6c693acd
	I1206 19:15:54.268534   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"6ea06f34-1ede-44f1-9662-8cba0265fa0f","resourceVersion":"992","creationTimestamp":"2023-12-06T19:15:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_15_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:15:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3991 chars]
	I1206 19:15:54.268918   86706 pod_ready.go:92] pod "kube-proxy-ggxmb" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:54.268943   86706 pod_ready.go:81] duration metric: took 1.264145808s waiting for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
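The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter (request.go), which defaults to about 5 requests/second with a burst of 10; the delays above are the client pacing itself, not the API server pushing back. A minimal sketch of where that limit is configured, with the kubeconfig path assumed:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        // client-go defaults: QPS=5, Burst=10. Requests beyond the burst are
        // delayed client-side, which is what the "Waited for ..." lines report.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }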
	I1206 19:15:54.268956   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:54.358250   86706 request.go:629] Waited for 89.202619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:15:54.358307   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:15:54.358312   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:54.358320   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:54.358332   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:54.360923   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:54.360945   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:54.360952   86706 round_trippers.go:580]     Audit-Id: e3ad5da2-33b5-4a5b-863c-43652e198003
	I1206 19:15:54.360958   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:54.360962   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:54.360968   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:54.360972   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:54.360978   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:54 GMT
	I1206 19:15:54.361198   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-thqkt","generateName":"kube-proxy-","namespace":"kube-system","uid":"0012fda4-56e7-4054-ab90-1704569e66e8","resourceVersion":"809","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:15:54.558046   86706 request.go:629] Waited for 196.396989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:54.558110   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:54.558115   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:54.558123   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:54.558129   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:54.560999   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:54.561025   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:54.561035   86706 round_trippers.go:580]     Audit-Id: a61f2ac2-7bbe-493b-b050-88a194882349
	I1206 19:15:54.561041   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:54.561046   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:54.561051   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:54.561058   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:54.561066   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:54 GMT
	I1206 19:15:54.561271   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:15:54.561587   86706 pod_ready.go:92] pod "kube-proxy-thqkt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:54.561602   86706 pod_ready.go:81] duration metric: took 292.633958ms waiting for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:54.561610   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:54.758036   86706 request.go:629] Waited for 196.353927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:15:54.758117   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:15:54.758125   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:54.758138   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:54.758147   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:54.760929   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:54.760959   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:54.760970   86706 round_trippers.go:580]     Audit-Id: 6d653c26-099e-4cd1-8aee-5ca952a26d70
	I1206 19:15:54.760979   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:54.760987   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:54.760999   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:54.761009   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:54.761020   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:54 GMT
	I1206 19:15:54.761210   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2wm","generateName":"kube-proxy-","namespace":"kube-system","uid":"366b51c9-af8f-4bd5-8200-dc43c4a3017c","resourceVersion":"676","creationTimestamp":"2023-12-06T19:05:15Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:05:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1206 19:15:54.958072   86706 request.go:629] Waited for 196.405082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:15:54.958147   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:15:54.958155   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:54.958163   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:54.958172   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:54.960881   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:54.960903   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:54.960911   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:54.960916   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:54.960922   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:54.960927   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:54 GMT
	I1206 19:15:54.960934   86706 round_trippers.go:580]     Audit-Id: 491313fd-ce8d-4bd9-85f4-82b56a137bb9
	I1206 19:15:54.960939   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:54.961176   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m03","uid":"a37befac-9ea6-49a7-a8c3-a9b16981befa","resourceVersion":"993","creationTimestamp":"2023-12-06T19:05:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_15_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:05:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1206 19:15:54.961493   86706 pod_ready.go:92] pod "kube-proxy-tp2wm" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:54.961513   86706 pod_ready.go:81] duration metric: took 399.89781ms waiting for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:54.961522   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:55.158010   86706 request.go:629] Waited for 196.40321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:15:55.158088   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:15:55.158095   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:55.158105   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:55.158113   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:55.160632   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:55.160662   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:55.160673   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:55.160682   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:55 GMT
	I1206 19:15:55.160691   86706 round_trippers.go:580]     Audit-Id: 0a37f25d-11f7-4adb-8650-11cc571e647b
	I1206 19:15:55.160698   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:55.160707   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:55.160714   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:55.161323   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-593099","namespace":"kube-system","uid":"7ae8a659-33ba-4e2b-9211-8d84efe7e5a4","resourceVersion":"831","creationTimestamp":"2023-12-06T19:03:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.mirror":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.seen":"2023-12-06T19:03:21.456083881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1206 19:15:55.358104   86706 request.go:629] Waited for 196.402982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:55.358168   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:15:55.358173   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:55.358181   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:55.358187   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:55.360994   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:15:55.361015   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:55.361022   86706 round_trippers.go:580]     Audit-Id: 16885ad7-1496-4b62-9dbe-dd2764954c2b
	I1206 19:15:55.361028   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:55.361033   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:55.361038   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:55.361047   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:55.361052   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:55 GMT
	I1206 19:15:55.361256   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:15:55.361669   86706 pod_ready.go:92] pod "kube-scheduler-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:15:55.361689   86706 pod_ready.go:81] duration metric: took 400.161053ms waiting for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:15:55.361702   86706 pod_ready.go:38] duration metric: took 2.400809307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:15:55.361716   86706 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 19:15:55.361770   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:15:55.375732   86706 system_svc.go:56] duration metric: took 14.005768ms WaitForService to wait for kubelet.
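WaitForService above answers "is kubelet running?" by shelling out to systemctl over SSH. A simplified local sketch of the same probe using os/exec (minikube itself runs the command on the node through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // With --quiet, systemctl prints nothing and the exit status alone
        // reports whether the unit is active. (Simplified from the exact
        // command string shown in the log above.)
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet active")
    }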
	I1206 19:15:55.375766   86706 kubeadm.go:581] duration metric: took 2.437539719s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 19:15:55.375786   86706 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:15:55.558221   86706 request.go:629] Waited for 182.362594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes
	I1206 19:15:55.558372   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes
	I1206 19:15:55.558385   86706 round_trippers.go:469] Request Headers:
	I1206 19:15:55.558393   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:15:55.558400   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:15:55.561510   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:15:55.561538   86706 round_trippers.go:577] Response Headers:
	I1206 19:15:55.561549   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:15:55 GMT
	I1206 19:15:55.561557   86706 round_trippers.go:580]     Audit-Id: dc531eee-ba94-4b70-bdd8-87153b6c34f0
	I1206 19:15:55.561566   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:15:55.561574   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:15:55.561583   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:15:55.561591   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:15:55.562215   86706 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1014"},"items":[{"metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16207 chars]
	I1206 19:15:55.562897   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:15:55.562943   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:15:55.562954   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:15:55.562958   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:15:55.562963   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:15:55.562966   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:15:55.562970   86706 node_conditions.go:105] duration metric: took 187.181023ms to run NodePressure ...
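The NodePressure check reads every node's capacity from a single GET /api/v1/nodes. A client-go sketch of the same readout (illustrative; the kubeconfig path is assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }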
	I1206 19:15:55.562984   86706 start.go:228] waiting for startup goroutines ...
	I1206 19:15:55.563004   86706 start.go:242] writing updated cluster config ...
	I1206 19:15:55.563421   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:15:55.563498   86706 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:15:55.566507   86706 out.go:177] * Starting worker node multinode-593099-m03 in cluster multinode-593099
	I1206 19:15:55.567928   86706 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:15:55.567962   86706 cache.go:56] Caching tarball of preloaded images
	I1206 19:15:55.568076   86706 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:15:55.568088   86706 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
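The preload step only downloads the images tarball when it is not already on disk; the check is a plain stat of the cache path. A minimal sketch of that decision, using the path from this run:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
        if _, err := os.Stat(tarball); err == nil {
            fmt.Println("found in cache, skipping download")
            return
        }
        fmt.Println("not cached, would download")
    }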
	I1206 19:15:55.568185   86706 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/config.json ...
	I1206 19:15:55.568358   86706 start.go:365] acquiring machines lock for multinode-593099-m03: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:15:55.568400   86706 start.go:369] acquired machines lock for "multinode-593099-m03" in 23.529µs
	I1206 19:15:55.568414   86706 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:15:55.568419   86706 fix.go:54] fixHost starting: m03
	I1206 19:15:55.568670   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:15:55.568701   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:15:55.583084   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I1206 19:15:55.583575   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:15:55.584079   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:15:55.584106   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:15:55.584429   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:15:55.584625   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .DriverName
	I1206 19:15:55.584780   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetState
	I1206 19:15:55.586358   86706 fix.go:102] recreateIfNeeded on multinode-593099-m03: state=Running err=<nil>
	W1206 19:15:55.586374   86706 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:15:55.588393   86706 out.go:177] * Updating the running kvm2 "multinode-593099-m03" VM ...
	I1206 19:15:55.589943   86706 machine.go:88] provisioning docker machine ...
	I1206 19:15:55.589974   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .DriverName
	I1206 19:15:55.590215   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetMachineName
	I1206 19:15:55.590394   86706 buildroot.go:166] provisioning hostname "multinode-593099-m03"
	I1206 19:15:55.590419   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetMachineName
	I1206 19:15:55.590563   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:15:55.592811   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.593312   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:15:55.593343   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.593494   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:15:55.593647   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:15:55.593765   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:15:55.593928   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:15:55.594082   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:15:55.594387   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1206 19:15:55.594403   86706 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-593099-m03 && echo "multinode-593099-m03" | sudo tee /etc/hostname
	I1206 19:15:55.727686   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-593099-m03
	
	I1206 19:15:55.727721   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:15:55.730783   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.731188   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:15:55.731210   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.731429   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:15:55.731599   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:15:55.731717   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:15:55.731872   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:15:55.732053   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:15:55.732361   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1206 19:15:55.732377   86706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-593099-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-593099-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-593099-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:15:55.850634   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
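Provisioning runs each step (set the hostname, patch /etc/hosts) as a shell command over SSH to 192.168.39.194 as user docker, authenticating with the machine's id_rsa key. A minimal golang.org/x/crypto/ssh sketch of running one such command follows; treat it as an illustration, not minikube's actual ssh_runner.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, address, user, and command are taken from this run.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.194:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-593099-m03 && echo "multinode-593099-m03" | sudo tee /etc/hostname`)
        fmt.Printf("err=%v output=%s\n", err, out)
    }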
	I1206 19:15:55.850669   86706 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:15:55.850694   86706 buildroot.go:174] setting up certificates
	I1206 19:15:55.850706   86706 provision.go:83] configureAuth start
	I1206 19:15:55.850715   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetMachineName
	I1206 19:15:55.850992   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetIP
	I1206 19:15:55.853679   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.854127   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:15:55.854162   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.854315   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:15:55.856602   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.857053   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:15:55.857084   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:55.857261   86706 provision.go:138] copyHostCerts
	I1206 19:15:55.857295   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:15:55.857332   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:15:55.857343   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:15:55.857431   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:15:55.857530   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:15:55.857554   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:15:55.857564   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:15:55.857604   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:15:55.857673   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:15:55.857698   86706 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:15:55.857703   86706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:15:55.857735   86706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:15:55.857805   86706 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.multinode-593099-m03 san=[192.168.39.194 192.168.39.194 localhost 127.0.0.1 minikube multinode-593099-m03]
	I1206 19:15:56.272957   86706 provision.go:172] copyRemoteCerts
	I1206 19:15:56.273080   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:15:56.273122   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:15:56.276018   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:56.276462   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:15:56.276489   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:56.276744   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:15:56.277013   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:15:56.277184   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:15:56.277352   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m03/id_rsa Username:docker}
	I1206 19:15:56.367582   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 19:15:56.367671   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1206 19:15:56.392120   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 19:15:56.392209   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:15:56.418596   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 19:15:56.418674   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:15:56.445986   86706 provision.go:86] duration metric: configureAuth took 595.264249ms
	I1206 19:15:56.446018   86706 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:15:56.446259   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:15:56.446349   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:15:56.449132   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:56.449477   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:15:56.449502   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:15:56.449643   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:15:56.449841   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:15:56.450075   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:15:56.450244   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:15:56.450476   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:15:56.450876   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1206 19:15:56.450895   86706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:17:27.132125   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:17:27.132200   86706 machine.go:91] provisioned docker machine in 1m31.542236213s
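The %!s(MISSING) in the SSH command above is a Go format-verb artifact in the log, not part of what ran on the guest; the command presumably expands to the snippet below, and the roughly 1m31s that "provisioned docker machine" reports is the time this whole command, including the crio restart at the end of it, took to return:

	sudo mkdir -p /etc/sysconfig && printf "%s" "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio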
	I1206 19:17:27.132214   86706 start.go:300] post-start starting for "multinode-593099-m03" (driver="kvm2")
	I1206 19:17:27.132257   86706 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:17:27.132283   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .DriverName
	I1206 19:17:27.132619   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:17:27.132700   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:17:27.135729   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.136191   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:17:27.136222   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.136337   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:17:27.136540   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:17:27.136721   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:17:27.136825   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m03/id_rsa Username:docker}
	I1206 19:17:27.228521   86706 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:17:27.232887   86706 command_runner.go:130] > NAME=Buildroot
	I1206 19:17:27.232908   86706 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1206 19:17:27.232912   86706 command_runner.go:130] > ID=buildroot
	I1206 19:17:27.232918   86706 command_runner.go:130] > VERSION_ID=2021.02.12
	I1206 19:17:27.232923   86706 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1206 19:17:27.233293   86706 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:17:27.233316   86706 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:17:27.233392   86706 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:17:27.233484   86706 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:17:27.233497   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /etc/ssl/certs/708342.pem
	I1206 19:17:27.233601   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:17:27.244480   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:17:27.270153   86706 start.go:303] post-start completed in 137.905565ms
	I1206 19:17:27.270183   86706 fix.go:56] fixHost completed within 1m31.701763788s
	I1206 19:17:27.270233   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:17:27.272939   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.273391   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:17:27.273435   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.273587   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:17:27.273847   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:17:27.274040   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:17:27.274185   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:17:27.274335   86706 main.go:141] libmachine: Using SSH client type: native
	I1206 19:17:27.274658   86706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1206 19:17:27.274670   86706 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:17:27.394258   86706 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701890247.386653213
	
	I1206 19:17:27.394288   86706 fix.go:206] guest clock: 1701890247.386653213
	I1206 19:17:27.394300   86706 fix.go:219] Guest: 2023-12-06 19:17:27.386653213 +0000 UTC Remote: 2023-12-06 19:17:27.270187435 +0000 UTC m=+555.815539181 (delta=116.465778ms)
	I1206 19:17:27.394321   86706 fix.go:190] guest clock delta is within tolerance: 116.465778ms
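The guest-clock check above runs date +%s.%N on the guest (logged with the verbs garbled as %!s(MISSING).%!N(MISSING)), compares the result to the host's wall clock, and accepts the node when the skew is small. A rough shell sketch of the same comparison; the key path and IP come from the log, while the 1-second tolerance is illustrative rather than minikube's actual threshold:

	guest=$(ssh -o StrictHostKeyChecking=no \
	  -i .minikube/machines/multinode-593099-m03/id_rsa \
	  docker@192.168.39.194 'date +%s.%N')
	host=$(date +%s.%N)
	# absolute skew between host and guest clocks, in seconds
	delta=$(awk -v h="$host" -v g="$guest" 'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
	awk -v d="$delta" 'BEGIN { exit !(d < 1) }' && echo "guest clock within tolerance: ${delta}s"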
	I1206 19:17:27.394327   86706 start.go:83] releasing machines lock for "multinode-593099-m03", held for 1m31.825918084s
	I1206 19:17:27.394382   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .DriverName
	I1206 19:17:27.394652   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetIP
	I1206 19:17:27.397073   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.397505   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:17:27.397529   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.399809   86706 out.go:177] * Found network options:
	I1206 19:17:27.401683   86706 out.go:177]   - NO_PROXY=192.168.39.125,192.168.39.6
	W1206 19:17:27.403161   86706 proxy.go:119] fail to check proxy env: Error ip not in block
	W1206 19:17:27.403198   86706 proxy.go:119] fail to check proxy env: Error ip not in block
	I1206 19:17:27.403211   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .DriverName
	I1206 19:17:27.403769   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .DriverName
	I1206 19:17:27.403954   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .DriverName
	I1206 19:17:27.404048   86706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:17:27.404079   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	W1206 19:17:27.404141   86706 proxy.go:119] fail to check proxy env: Error ip not in block
	W1206 19:17:27.404162   86706 proxy.go:119] fail to check proxy env: Error ip not in block
	I1206 19:17:27.404309   86706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:17:27.404331   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHHostname
	I1206 19:17:27.406995   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.407132   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.407408   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:17:27.407436   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.407589   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:17:27.407628   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:17:27.407658   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:27.407769   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:17:27.407843   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHPort
	I1206 19:17:27.407944   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:17:27.408015   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHKeyPath
	I1206 19:17:27.408091   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m03/id_rsa Username:docker}
	I1206 19:17:27.408117   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetSSHUsername
	I1206 19:17:27.408248   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m03/id_rsa Username:docker}
	I1206 19:17:27.522358   86706 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 19:17:27.640839   86706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 19:17:27.646694   86706 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 19:17:27.647063   86706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:17:27.647136   86706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:17:27.658155   86706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
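The find invocation above also carries a Go format artifact (%!p(MISSING)); the command that presumably runs prints each matched path with %p and renames any bridge/podman CNI config out of the way. A quoted-up sketch of the same invocation:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;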
	I1206 19:17:27.658187   86706 start.go:475] detecting cgroup driver to use...
	I1206 19:17:27.658265   86706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:17:27.674973   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:17:27.690438   86706 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:17:27.690491   86706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:17:27.705044   86706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:17:27.718504   86706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:17:27.841060   86706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:17:27.993828   86706 docker.go:219] disabling docker service ...
	I1206 19:17:27.993909   86706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:17:28.009852   86706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:17:28.024026   86706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:17:28.148419   86706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:17:28.273416   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:17:28.287456   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:17:28.304665   86706 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 19:17:28.304725   86706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:17:28.304771   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:17:28.315980   86706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:17:28.316053   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:17:28.326553   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:17:28.341299   86706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
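The sed edits above rewrite CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, force the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" after it. The expected end state is roughly the snippet below; the section headers are how these keys are normally grouped in CRI-O's TOML config and are inferred, not captured from the guest:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"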
	I1206 19:17:28.393290   86706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:17:28.426785   86706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:17:28.448791   86706 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 19:17:28.448890   86706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:17:28.463239   86706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:17:28.596711   86706 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:17:31.785122   86706 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.188370682s)
	I1206 19:17:31.785155   86706 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:17:31.785218   86706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:17:31.793086   86706 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 19:17:31.793113   86706 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 19:17:31.793124   86706 command_runner.go:130] > Device: 16h/22d	Inode: 1263        Links: 1
	I1206 19:17:31.793135   86706 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:17:31.793145   86706 command_runner.go:130] > Access: 2023-12-06 19:17:31.684921497 +0000
	I1206 19:17:31.793153   86706 command_runner.go:130] > Modify: 2023-12-06 19:17:31.684921497 +0000
	I1206 19:17:31.793165   86706 command_runner.go:130] > Change: 2023-12-06 19:17:31.684921497 +0000
	I1206 19:17:31.793171   86706 command_runner.go:130] >  Birth: -
	I1206 19:17:31.793199   86706 start.go:543] Will wait 60s for crictl version
	I1206 19:17:31.793262   86706 ssh_runner.go:195] Run: which crictl
	I1206 19:17:31.796838   86706 command_runner.go:130] > /usr/bin/crictl
	I1206 19:17:31.797127   86706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:17:31.836556   86706 command_runner.go:130] > Version:  0.1.0
	I1206 19:17:31.836584   86706 command_runner.go:130] > RuntimeName:  cri-o
	I1206 19:17:31.836588   86706 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1206 19:17:31.836594   86706 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 19:17:31.837797   86706 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:17:31.837877   86706 ssh_runner.go:195] Run: crio --version
	I1206 19:17:31.889062   86706 command_runner.go:130] > crio version 1.24.1
	I1206 19:17:31.889091   86706 command_runner.go:130] > Version:          1.24.1
	I1206 19:17:31.889101   86706 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:17:31.889107   86706 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:17:31.889115   86706 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:17:31.889122   86706 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:17:31.889129   86706 command_runner.go:130] > Compiler:         gc
	I1206 19:17:31.889137   86706 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:17:31.889149   86706 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:17:31.889163   86706 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:17:31.889173   86706 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:17:31.889182   86706 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:17:31.889461   86706 ssh_runner.go:195] Run: crio --version
	I1206 19:17:31.938411   86706 command_runner.go:130] > crio version 1.24.1
	I1206 19:17:31.938436   86706 command_runner.go:130] > Version:          1.24.1
	I1206 19:17:31.938449   86706 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1206 19:17:31.938456   86706 command_runner.go:130] > GitTreeState:     dirty
	I1206 19:17:31.938465   86706 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1206 19:17:31.938471   86706 command_runner.go:130] > GoVersion:        go1.19.9
	I1206 19:17:31.938477   86706 command_runner.go:130] > Compiler:         gc
	I1206 19:17:31.938483   86706 command_runner.go:130] > Platform:         linux/amd64
	I1206 19:17:31.938493   86706 command_runner.go:130] > Linkmode:         dynamic
	I1206 19:17:31.938504   86706 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 19:17:31.938515   86706 command_runner.go:130] > SeccompEnabled:   true
	I1206 19:17:31.938524   86706 command_runner.go:130] > AppArmorEnabled:  false
	I1206 19:17:31.940540   86706 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:17:31.942084   86706 out.go:177]   - env NO_PROXY=192.168.39.125
	I1206 19:17:31.943427   86706 out.go:177]   - env NO_PROXY=192.168.39.125,192.168.39.6
	I1206 19:17:31.944633   86706 main.go:141] libmachine: (multinode-593099-m03) Calling .GetIP
	I1206 19:17:31.947378   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:31.947781   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:72:d8", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:05:52 +0000 UTC Type:0 Mac:52:54:00:3d:72:d8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-593099-m03 Clientid:01:52:54:00:3d:72:d8}
	I1206 19:17:31.947827   86706 main.go:141] libmachine: (multinode-593099-m03) DBG | domain multinode-593099-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:3d:72:d8 in network mk-multinode-593099
	I1206 19:17:31.948064   86706 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:17:31.952354   86706 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1206 19:17:31.952658   86706 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099 for IP: 192.168.39.194
	I1206 19:17:31.952711   86706 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:17:31.952897   86706 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:17:31.952937   86706 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:17:31.952950   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 19:17:31.952965   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 19:17:31.952979   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 19:17:31.952993   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 19:17:31.953042   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:17:31.953077   86706 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:17:31.953087   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:17:31.953112   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:17:31.953136   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:17:31.953158   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:17:31.953196   86706 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:17:31.953220   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> /usr/share/ca-certificates/708342.pem
	I1206 19:17:31.953246   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:17:31.953264   86706 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem -> /usr/share/ca-certificates/70834.pem
	I1206 19:17:31.953734   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:17:31.976485   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:17:31.999665   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:17:32.022747   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:17:32.045221   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:17:32.066901   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:17:32.089648   86706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:17:32.112224   86706 ssh_runner.go:195] Run: openssl version
	I1206 19:17:32.117838   86706 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1206 19:17:32.117924   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:17:32.128374   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:17:32.133175   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:17:32.133464   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:17:32.133523   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:17:32.138687   86706 command_runner.go:130] > 51391683
	I1206 19:17:32.138930   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:17:32.147824   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:17:32.158119   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:17:32.162361   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:17:32.162479   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:17:32.162527   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:17:32.167820   86706 command_runner.go:130] > 3ec20f2e
	I1206 19:17:32.167905   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:17:32.176820   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:17:32.187736   86706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:17:32.192237   86706 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:17:32.192267   86706 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:17:32.192311   86706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:17:32.197665   86706 command_runner.go:130] > b5213941
	I1206 19:17:32.197932   86706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
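The test -L / ln -fs pattern above is OpenSSL's hashed-directory convention: each trusted CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash is exactly what openssl x509 -hash -noout printed a few lines earlier. A generic version of the loop minikube is effectively unrolling here (the loop itself is illustrative):

	for pem in /usr/share/ca-certificates/*.pem; do
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	done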
	I1206 19:17:32.206626   86706 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:17:32.210521   86706 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:17:32.210663   86706 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:17:32.210796   86706 ssh_runner.go:195] Run: crio config
	I1206 19:17:32.262493   86706 command_runner.go:130] ! time="2023-12-06 19:17:32.255093488Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1206 19:17:32.262558   86706 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 19:17:32.271659   86706 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 19:17:32.271689   86706 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 19:17:32.271696   86706 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 19:17:32.271700   86706 command_runner.go:130] > #
	I1206 19:17:32.271709   86706 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 19:17:32.271715   86706 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 19:17:32.271725   86706 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 19:17:32.271743   86706 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 19:17:32.271757   86706 command_runner.go:130] > # reload'.
	I1206 19:17:32.271767   86706 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 19:17:32.271777   86706 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 19:17:32.271793   86706 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 19:17:32.271799   86706 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 19:17:32.271805   86706 command_runner.go:130] > [crio]
	I1206 19:17:32.271817   86706 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 19:17:32.271829   86706 command_runner.go:130] > # containers images, in this directory.
	I1206 19:17:32.271842   86706 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1206 19:17:32.271860   86706 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 19:17:32.271872   86706 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1206 19:17:32.271883   86706 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 19:17:32.271897   86706 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 19:17:32.271908   86706 command_runner.go:130] > storage_driver = "overlay"
	I1206 19:17:32.271921   86706 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 19:17:32.271934   86706 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 19:17:32.271945   86706 command_runner.go:130] > storage_option = [
	I1206 19:17:32.271956   86706 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1206 19:17:32.271965   86706 command_runner.go:130] > ]
	I1206 19:17:32.271977   86706 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 19:17:32.271987   86706 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 19:17:32.271999   86706 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 19:17:32.272011   86706 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 19:17:32.272023   86706 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 19:17:32.272034   86706 command_runner.go:130] > # always happen on a node reboot
	I1206 19:17:32.272044   86706 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 19:17:32.272057   86706 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 19:17:32.272070   86706 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 19:17:32.272086   86706 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 19:17:32.272097   86706 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1206 19:17:32.272113   86706 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 19:17:32.272127   86706 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 19:17:32.272133   86706 command_runner.go:130] > # internal_wipe = true
	I1206 19:17:32.272143   86706 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 19:17:32.272157   86706 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 19:17:32.272171   86706 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 19:17:32.272183   86706 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 19:17:32.272196   86706 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 19:17:32.272206   86706 command_runner.go:130] > [crio.api]
	I1206 19:17:32.272214   86706 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 19:17:32.272225   86706 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 19:17:32.272236   86706 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 19:17:32.272243   86706 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 19:17:32.272257   86706 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 19:17:32.272270   86706 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 19:17:32.272277   86706 command_runner.go:130] > # stream_port = "0"
	I1206 19:17:32.272286   86706 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 19:17:32.272292   86706 command_runner.go:130] > # stream_enable_tls = false
	I1206 19:17:32.272302   86706 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 19:17:32.272308   86706 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 19:17:32.272317   86706 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 19:17:32.272327   86706 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1206 19:17:32.272334   86706 command_runner.go:130] > # minutes.
	I1206 19:17:32.272341   86706 command_runner.go:130] > # stream_tls_cert = ""
	I1206 19:17:32.272358   86706 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 19:17:32.272368   86706 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1206 19:17:32.272378   86706 command_runner.go:130] > # stream_tls_key = ""
	I1206 19:17:32.272387   86706 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 19:17:32.272401   86706 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 19:17:32.272412   86706 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1206 19:17:32.272421   86706 command_runner.go:130] > # stream_tls_ca = ""
	I1206 19:17:32.272433   86706 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:17:32.272444   86706 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1206 19:17:32.272455   86706 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 19:17:32.272466   86706 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1206 19:17:32.272495   86706 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 19:17:32.272510   86706 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 19:17:32.272517   86706 command_runner.go:130] > [crio.runtime]
	I1206 19:17:32.272529   86706 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 19:17:32.272541   86706 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 19:17:32.272551   86706 command_runner.go:130] > # "nofile=1024:2048"
	I1206 19:17:32.272566   86706 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 19:17:32.272576   86706 command_runner.go:130] > # default_ulimits = [
	I1206 19:17:32.272586   86706 command_runner.go:130] > # ]
	I1206 19:17:32.272600   86706 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 19:17:32.272609   86706 command_runner.go:130] > # no_pivot = false
	I1206 19:17:32.272621   86706 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 19:17:32.272634   86706 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 19:17:32.272645   86706 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 19:17:32.272655   86706 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 19:17:32.272666   86706 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 19:17:32.272680   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:17:32.272691   86706 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1206 19:17:32.272703   86706 command_runner.go:130] > # Cgroup setting for conmon
	I1206 19:17:32.272716   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 19:17:32.272726   86706 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 19:17:32.272739   86706 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 19:17:32.272750   86706 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 19:17:32.272764   86706 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 19:17:32.272774   86706 command_runner.go:130] > conmon_env = [
	I1206 19:17:32.272793   86706 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1206 19:17:32.272802   86706 command_runner.go:130] > ]
	I1206 19:17:32.272811   86706 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 19:17:32.272822   86706 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 19:17:32.272835   86706 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 19:17:32.272845   86706 command_runner.go:130] > # default_env = [
	I1206 19:17:32.272854   86706 command_runner.go:130] > # ]
	I1206 19:17:32.272865   86706 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 19:17:32.272872   86706 command_runner.go:130] > # selinux = false
	I1206 19:17:32.272885   86706 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 19:17:32.272895   86706 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1206 19:17:32.272907   86706 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1206 19:17:32.272918   86706 command_runner.go:130] > # seccomp_profile = ""
	I1206 19:17:32.272929   86706 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1206 19:17:32.272938   86706 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1206 19:17:32.272951   86706 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1206 19:17:32.272962   86706 command_runner.go:130] > # which might increase security.
	I1206 19:17:32.272974   86706 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1206 19:17:32.272986   86706 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 19:17:32.272999   86706 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 19:17:32.273012   86706 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 19:17:32.273025   86706 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 19:17:32.273037   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:17:32.273047   86706 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 19:17:32.273059   86706 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 19:17:32.273069   86706 command_runner.go:130] > # the cgroup blockio controller.
	I1206 19:17:32.273079   86706 command_runner.go:130] > # blockio_config_file = ""
	I1206 19:17:32.273092   86706 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 19:17:32.273101   86706 command_runner.go:130] > # irqbalance daemon.
	I1206 19:17:32.273110   86706 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 19:17:32.273125   86706 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 19:17:32.273136   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:17:32.273146   86706 command_runner.go:130] > # rdt_config_file = ""
	I1206 19:17:32.273155   86706 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 19:17:32.273164   86706 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 19:17:32.273174   86706 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 19:17:32.273183   86706 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 19:17:32.273196   86706 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 19:17:32.273209   86706 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 19:17:32.273218   86706 command_runner.go:130] > # will be added.
	I1206 19:17:32.273225   86706 command_runner.go:130] > # default_capabilities = [
	I1206 19:17:32.273248   86706 command_runner.go:130] > # 	"CHOWN",
	I1206 19:17:32.273255   86706 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 19:17:32.273264   86706 command_runner.go:130] > # 	"FSETID",
	I1206 19:17:32.273271   86706 command_runner.go:130] > # 	"FOWNER",
	I1206 19:17:32.273278   86706 command_runner.go:130] > # 	"SETGID",
	I1206 19:17:32.273287   86706 command_runner.go:130] > # 	"SETUID",
	I1206 19:17:32.273294   86706 command_runner.go:130] > # 	"SETPCAP",
	I1206 19:17:32.273304   86706 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 19:17:32.273314   86706 command_runner.go:130] > # 	"KILL",
	I1206 19:17:32.273320   86706 command_runner.go:130] > # ]
	I1206 19:17:32.273333   86706 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 19:17:32.273346   86706 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:17:32.273357   86706 command_runner.go:130] > # default_sysctls = [
	I1206 19:17:32.273367   86706 command_runner.go:130] > # ]
	I1206 19:17:32.273374   86706 command_runner.go:130] > # List of devices on the host that a
	I1206 19:17:32.273386   86706 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 19:17:32.273394   86706 command_runner.go:130] > # allowed_devices = [
	I1206 19:17:32.273404   86706 command_runner.go:130] > # 	"/dev/fuse",
	I1206 19:17:32.273412   86706 command_runner.go:130] > # ]
	I1206 19:17:32.273422   86706 command_runner.go:130] > # List of additional devices. specified as
	I1206 19:17:32.273437   86706 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 19:17:32.273449   86706 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 19:17:32.273477   86706 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 19:17:32.273487   86706 command_runner.go:130] > # additional_devices = [
	I1206 19:17:32.273492   86706 command_runner.go:130] > # ]
	I1206 19:17:32.273501   86706 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 19:17:32.273510   86706 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 19:17:32.273515   86706 command_runner.go:130] > # 	"/etc/cdi",
	I1206 19:17:32.273524   86706 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 19:17:32.273530   86706 command_runner.go:130] > # ]
	I1206 19:17:32.273542   86706 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 19:17:32.273553   86706 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 19:17:32.273559   86706 command_runner.go:130] > # Defaults to false.
	I1206 19:17:32.273569   86706 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 19:17:32.273579   86706 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 19:17:32.273590   86706 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 19:17:32.273599   86706 command_runner.go:130] > # hooks_dir = [
	I1206 19:17:32.273606   86706 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 19:17:32.273615   86706 command_runner.go:130] > # ]
	I1206 19:17:32.273624   86706 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 19:17:32.273638   86706 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 19:17:32.273650   86706 command_runner.go:130] > # its default mounts from the following two files:
	I1206 19:17:32.273659   86706 command_runner.go:130] > #
	I1206 19:17:32.273669   86706 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 19:17:32.273682   86706 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 19:17:32.273694   86706 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 19:17:32.273702   86706 command_runner.go:130] > #
	I1206 19:17:32.273712   86706 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 19:17:32.273726   86706 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 19:17:32.273740   86706 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 19:17:32.273753   86706 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 19:17:32.273762   86706 command_runner.go:130] > #
	I1206 19:17:32.273770   86706 command_runner.go:130] > # default_mounts_file = ""
	I1206 19:17:32.273786   86706 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 19:17:32.273800   86706 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 19:17:32.273809   86706 command_runner.go:130] > pids_limit = 1024
	I1206 19:17:32.273819   86706 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 19:17:32.273831   86706 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 19:17:32.273845   86706 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 19:17:32.273860   86706 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 19:17:32.273870   86706 command_runner.go:130] > # log_size_max = -1
	I1206 19:17:32.273881   86706 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 19:17:32.273891   86706 command_runner.go:130] > # log_to_journald = false
	I1206 19:17:32.273901   86706 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 19:17:32.273912   86706 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 19:17:32.273923   86706 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 19:17:32.273934   86706 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 19:17:32.273943   86706 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 19:17:32.273951   86706 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 19:17:32.273962   86706 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 19:17:32.273972   86706 command_runner.go:130] > # read_only = false
	I1206 19:17:32.273982   86706 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 19:17:32.273993   86706 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 19:17:32.274004   86706 command_runner.go:130] > # live configuration reload.
	I1206 19:17:32.274010   86706 command_runner.go:130] > # log_level = "info"
	I1206 19:17:32.274022   86706 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 19:17:32.274033   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:17:32.274043   86706 command_runner.go:130] > # log_filter = ""
	I1206 19:17:32.274053   86706 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 19:17:32.274065   86706 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 19:17:32.274072   86706 command_runner.go:130] > # separated by comma.
	I1206 19:17:32.274080   86706 command_runner.go:130] > # uid_mappings = ""
	I1206 19:17:32.274092   86706 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 19:17:32.274106   86706 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 19:17:32.274117   86706 command_runner.go:130] > # separated by comma.
	I1206 19:17:32.274128   86706 command_runner.go:130] > # gid_mappings = ""
	I1206 19:17:32.274140   86706 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 19:17:32.274153   86706 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:17:32.274163   86706 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:17:32.274173   86706 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 19:17:32.274185   86706 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 19:17:32.274198   86706 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 19:17:32.274210   86706 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 19:17:32.274221   86706 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 19:17:32.274234   86706 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 19:17:32.274246   86706 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 19:17:32.274258   86706 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 19:17:32.274265   86706 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 19:17:32.274274   86706 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 19:17:32.274282   86706 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 19:17:32.274289   86706 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 19:17:32.274294   86706 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 19:17:32.274305   86706 command_runner.go:130] > drop_infra_ctr = false
	I1206 19:17:32.274313   86706 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 19:17:32.274321   86706 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 19:17:32.274331   86706 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 19:17:32.274338   86706 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 19:17:32.274344   86706 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 19:17:32.274351   86706 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 19:17:32.274356   86706 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 19:17:32.274362   86706 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 19:17:32.274369   86706 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1206 19:17:32.274375   86706 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 19:17:32.274383   86706 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1206 19:17:32.274391   86706 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1206 19:17:32.274398   86706 command_runner.go:130] > # default_runtime = "runc"
	I1206 19:17:32.274404   86706 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 19:17:32.274413   86706 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1206 19:17:32.274424   86706 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 19:17:32.274431   86706 command_runner.go:130] > # creation as a file is not desired either.
	I1206 19:17:32.274439   86706 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 19:17:32.274446   86706 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 19:17:32.274451   86706 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 19:17:32.274457   86706 command_runner.go:130] > # ]
	I1206 19:17:32.274463   86706 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 19:17:32.274471   86706 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 19:17:32.274480   86706 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1206 19:17:32.274488   86706 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1206 19:17:32.274493   86706 command_runner.go:130] > #
	I1206 19:17:32.274498   86706 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1206 19:17:32.274505   86706 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1206 19:17:32.274511   86706 command_runner.go:130] > #  runtime_type = "oci"
	I1206 19:17:32.274516   86706 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1206 19:17:32.274524   86706 command_runner.go:130] > #  privileged_without_host_devices = false
	I1206 19:17:32.274530   86706 command_runner.go:130] > #  allowed_annotations = []
	I1206 19:17:32.274534   86706 command_runner.go:130] > # Where:
	I1206 19:17:32.274541   86706 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1206 19:17:32.274549   86706 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1206 19:17:32.274557   86706 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 19:17:32.274565   86706 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 19:17:32.274571   86706 command_runner.go:130] > #   in $PATH.
	I1206 19:17:32.274577   86706 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1206 19:17:32.274584   86706 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 19:17:32.274590   86706 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1206 19:17:32.274596   86706 command_runner.go:130] > #   state.
	I1206 19:17:32.274602   86706 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 19:17:32.274610   86706 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 19:17:32.274617   86706 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 19:17:32.274624   86706 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 19:17:32.274632   86706 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 19:17:32.274639   86706 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 19:17:32.274646   86706 command_runner.go:130] > #   The currently recognized values are:
	I1206 19:17:32.274652   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 19:17:32.274661   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 19:17:32.274669   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 19:17:32.274675   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 19:17:32.274685   86706 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 19:17:32.274693   86706 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 19:17:32.274702   86706 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 19:17:32.274710   86706 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1206 19:17:32.274717   86706 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 19:17:32.274722   86706 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 19:17:32.274728   86706 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1206 19:17:32.274733   86706 command_runner.go:130] > runtime_type = "oci"
	I1206 19:17:32.274739   86706 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 19:17:32.274743   86706 command_runner.go:130] > runtime_config_path = ""
	I1206 19:17:32.274750   86706 command_runner.go:130] > monitor_path = ""
	I1206 19:17:32.274754   86706 command_runner.go:130] > monitor_cgroup = ""
	I1206 19:17:32.274760   86706 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 19:17:32.274766   86706 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1206 19:17:32.274772   86706 command_runner.go:130] > # running containers
	I1206 19:17:32.274776   86706 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1206 19:17:32.274789   86706 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1206 19:17:32.274816   86706 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1206 19:17:32.274824   86706 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1206 19:17:32.274831   86706 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1206 19:17:32.274836   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1206 19:17:32.274843   86706 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1206 19:17:32.274847   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1206 19:17:32.274854   86706 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1206 19:17:32.274859   86706 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
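Following the table format described above, one of the commented-out handlers (for example kata-qemu) would, if enabled, take roughly this shape; the paths and values are assumptions for illustration only, not part of this run's configuration:

	# [crio.runtime.runtimes.kata-qemu]
	#   runtime_path = "/usr/bin/kata-qemu"
	#   runtime_type = "vm"
	#   runtime_root = "/run/vc"
	#   privileged_without_host_devices = true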
	I1206 19:17:32.274867   86706 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 19:17:32.274875   86706 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 19:17:32.274881   86706 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 19:17:32.274890   86706 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1206 19:17:32.274900   86706 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1206 19:17:32.274907   86706 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 19:17:32.274919   86706 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 19:17:32.274928   86706 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 19:17:32.274936   86706 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 19:17:32.274943   86706 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 19:17:32.274949   86706 command_runner.go:130] > # Example:
	I1206 19:17:32.274954   86706 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 19:17:32.274961   86706 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 19:17:32.274968   86706 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 19:17:32.274976   86706 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 19:17:32.274982   86706 command_runner.go:130] > # cpuset = 0
	I1206 19:17:32.274986   86706 command_runner.go:130] > # cpushares = "0-1"
	I1206 19:17:32.274992   86706 command_runner.go:130] > # Where:
	I1206 19:17:32.274996   86706 command_runner.go:130] > # The workload name is workload-type.
	I1206 19:17:32.275005   86706 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 19:17:32.275013   86706 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 19:17:32.275020   86706 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 19:17:32.275028   86706 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 19:17:32.275036   86706 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 19:17:32.275040   86706 command_runner.go:130] > # 
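To opt a pod into the example workload above, the pod carries the activation annotation and, optionally, a per-container override in the form shown. A hypothetical manifest fragment (container name invented for illustration) could look like:

	metadata:
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/mycontainer: '{"cpushares": "512"}'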
	I1206 19:17:32.275046   86706 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 19:17:32.275052   86706 command_runner.go:130] > #
	I1206 19:17:32.275058   86706 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 19:17:32.275066   86706 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1206 19:17:32.275074   86706 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1206 19:17:32.275083   86706 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1206 19:17:32.275091   86706 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1206 19:17:32.275097   86706 command_runner.go:130] > [crio.image]
	I1206 19:17:32.275103   86706 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 19:17:32.275110   86706 command_runner.go:130] > # default_transport = "docker://"
	I1206 19:17:32.275116   86706 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 19:17:32.275124   86706 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:17:32.275130   86706 command_runner.go:130] > # global_auth_file = ""
	I1206 19:17:32.275135   86706 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 19:17:32.275142   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:17:32.275147   86706 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1206 19:17:32.275158   86706 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 19:17:32.275165   86706 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 19:17:32.275172   86706 command_runner.go:130] > # This option supports live configuration reload.
	I1206 19:17:32.275177   86706 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 19:17:32.275184   86706 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 19:17:32.275193   86706 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 19:17:32.275202   86706 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 19:17:32.275211   86706 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 19:17:32.275217   86706 command_runner.go:130] > # pause_command = "/pause"
	I1206 19:17:32.275223   86706 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 19:17:32.275231   86706 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 19:17:32.275240   86706 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 19:17:32.275247   86706 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 19:17:32.275254   86706 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 19:17:32.275261   86706 command_runner.go:130] > # signature_policy = ""
	I1206 19:17:32.275267   86706 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 19:17:32.275275   86706 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 19:17:32.275280   86706 command_runner.go:130] > # changing them here.
	I1206 19:17:32.275284   86706 command_runner.go:130] > # insecure_registries = [
	I1206 19:17:32.275289   86706 command_runner.go:130] > # ]
	I1206 19:17:32.275298   86706 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 19:17:32.275305   86706 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 19:17:32.275309   86706 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 19:17:32.275317   86706 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 19:17:32.275321   86706 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 19:17:32.275329   86706 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 19:17:32.275335   86706 command_runner.go:130] > # CNI plugins.
	I1206 19:17:32.275339   86706 command_runner.go:130] > [crio.network]
	I1206 19:17:32.275347   86706 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 19:17:32.275355   86706 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 19:17:32.275359   86706 command_runner.go:130] > # cni_default_network = ""
	I1206 19:17:32.275367   86706 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 19:17:32.275373   86706 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 19:17:32.275379   86706 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 19:17:32.275385   86706 command_runner.go:130] > # plugin_dirs = [
	I1206 19:17:32.275390   86706 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 19:17:32.275395   86706 command_runner.go:130] > # ]
	I1206 19:17:32.275401   86706 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 19:17:32.275408   86706 command_runner.go:130] > [crio.metrics]
	I1206 19:17:32.275416   86706 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 19:17:32.275420   86706 command_runner.go:130] > enable_metrics = true
	I1206 19:17:32.275427   86706 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 19:17:32.275432   86706 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 19:17:32.275440   86706 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 19:17:32.275448   86706 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 19:17:32.275455   86706 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 19:17:32.275462   86706 command_runner.go:130] > # metrics_collectors = [
	I1206 19:17:32.275466   86706 command_runner.go:130] > # 	"operations",
	I1206 19:17:32.275473   86706 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1206 19:17:32.275478   86706 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1206 19:17:32.275485   86706 command_runner.go:130] > # 	"operations_errors",
	I1206 19:17:32.275490   86706 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1206 19:17:32.275496   86706 command_runner.go:130] > # 	"image_pulls_by_name",
	I1206 19:17:32.275500   86706 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1206 19:17:32.275507   86706 command_runner.go:130] > # 	"image_pulls_failures",
	I1206 19:17:32.275511   86706 command_runner.go:130] > # 	"image_pulls_successes",
	I1206 19:17:32.275517   86706 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 19:17:32.275522   86706 command_runner.go:130] > # 	"image_layer_reuse",
	I1206 19:17:32.275528   86706 command_runner.go:130] > # 	"containers_oom_total",
	I1206 19:17:32.275533   86706 command_runner.go:130] > # 	"containers_oom",
	I1206 19:17:32.275539   86706 command_runner.go:130] > # 	"processes_defunct",
	I1206 19:17:32.275543   86706 command_runner.go:130] > # 	"operations_total",
	I1206 19:17:32.275550   86706 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 19:17:32.275555   86706 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 19:17:32.275562   86706 command_runner.go:130] > # 	"operations_errors_total",
	I1206 19:17:32.275566   86706 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 19:17:32.275573   86706 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 19:17:32.275578   86706 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 19:17:32.275584   86706 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 19:17:32.275589   86706 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 19:17:32.275595   86706 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 19:17:32.275599   86706 command_runner.go:130] > # ]
	I1206 19:17:32.275606   86706 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 19:17:32.275613   86706 command_runner.go:130] > # metrics_port = 9090
	I1206 19:17:32.275618   86706 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 19:17:32.275624   86706 command_runner.go:130] > # metrics_socket = ""
	I1206 19:17:32.275629   86706 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 19:17:32.275638   86706 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 19:17:32.275647   86706 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 19:17:32.275654   86706 command_runner.go:130] > # certificate on any modification event.
	I1206 19:17:32.275658   86706 command_runner.go:130] > # metrics_cert = ""
	I1206 19:17:32.275665   86706 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 19:17:32.275672   86706 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 19:17:32.275676   86706 command_runner.go:130] > # metrics_key = ""
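Since enable_metrics is set to true above and the documented default port is 9090, the metrics endpoint can be probed from the node with something like the following (a sketch that assumes the default port is in effect on this host):

	curl -s http://127.0.0.1:9090/metrics | grep crio_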
	I1206 19:17:32.275682   86706 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 19:17:32.275688   86706 command_runner.go:130] > [crio.tracing]
	I1206 19:17:32.275694   86706 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 19:17:32.275700   86706 command_runner.go:130] > # enable_tracing = false
	I1206 19:17:32.275706   86706 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1206 19:17:32.275712   86706 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1206 19:17:32.275717   86706 command_runner.go:130] > # Number of samples to collect per million spans.
	I1206 19:17:32.275724   86706 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 19:17:32.275730   86706 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 19:17:32.275737   86706 command_runner.go:130] > [crio.stats]
	I1206 19:17:32.275742   86706 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 19:17:32.275750   86706 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 19:17:32.275764   86706 command_runner.go:130] > # stats_collection_period = 0
	I1206 19:17:32.275849   86706 cni.go:84] Creating CNI manager for ""
	I1206 19:17:32.275859   86706 cni.go:136] 3 nodes found, recommending kindnet
	I1206 19:17:32.275868   86706 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:17:32.275909   86706 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-593099 NodeName:multinode-593099-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:17:32.276040   86706 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-593099-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:17:32.276110   86706 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-593099-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
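The ExecStart line above is what gets written into the kubelet systemd drop-in a few steps below; on the node, the rendered unit can be inspected with a standard systemd command such as:

	sudo systemctl cat kubelet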
	I1206 19:17:32.276182   86706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:17:32.286330   86706 command_runner.go:130] > kubeadm
	I1206 19:17:32.286352   86706 command_runner.go:130] > kubectl
	I1206 19:17:32.286356   86706 command_runner.go:130] > kubelet
	I1206 19:17:32.286375   86706 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:17:32.286437   86706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1206 19:17:32.295627   86706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1206 19:17:32.313393   86706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:17:32.329744   86706 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I1206 19:17:32.333373   86706 command_runner.go:130] > 192.168.39.125	control-plane.minikube.internal
	I1206 19:17:32.333664   86706 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:17:32.333916   86706 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:17:32.334018   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:17:32.334062   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:17:32.348939   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I1206 19:17:32.349479   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:17:32.350047   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:17:32.350067   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:17:32.350618   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:17:32.350853   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:17:32.351026   86706 start.go:304] JoinCluster: &{Name:multinode-593099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-593099 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.6 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:17:32.351177   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1206 19:17:32.351199   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:17:32.354179   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:17:32.354640   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:17:32.354660   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:17:32.354868   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:17:32.355038   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:17:32.355169   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:17:32.355305   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:17:32.528397   86706 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pljvoo.606fh4wo921gby7i --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 19:17:32.528779   86706 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1206 19:17:32.528825   86706 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:17:32.529131   86706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:17:32.529180   86706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:17:32.543855   86706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I1206 19:17:32.544247   86706 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:17:32.544725   86706 main.go:141] libmachine: Using API Version  1
	I1206 19:17:32.544746   86706 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:17:32.545051   86706 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:17:32.545352   86706 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:17:32.545554   86706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-593099-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1206 19:17:32.545585   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:17:32.548235   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:17:32.548599   86706 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:13:22 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:17:32.548624   86706 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:17:32.548748   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:17:32.548930   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:17:32.549114   86706 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:17:32.549258   86706 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:17:32.735843   86706 command_runner.go:130] > node/multinode-593099-m03 cordoned
	I1206 19:17:35.787387   86706 command_runner.go:130] > pod "busybox-5bc68d56bd-5d8qw" has DeletionTimestamp older than 1 seconds, skipping
	I1206 19:17:35.787423   86706 command_runner.go:130] > node/multinode-593099-m03 drained
	I1206 19:17:35.789227   86706 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1206 19:17:35.789262   86706 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-mbkkj, kube-system/kube-proxy-tp2wm
	I1206 19:17:35.789285   86706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-593099-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.243704988s)
	I1206 19:17:35.789296   86706 node.go:108] successfully drained node "m03"
	I1206 19:17:35.789839   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:17:35.790170   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:17:35.790581   86706 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1206 19:17:35.790627   86706 round_trippers.go:463] DELETE https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:17:35.790633   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:35.790642   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:35.790649   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:35.790656   86706 round_trippers.go:473]     Content-Type: application/json
	I1206 19:17:35.803267   86706 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1206 19:17:35.803289   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:35.803297   86706 round_trippers.go:580]     Content-Length: 171
	I1206 19:17:35.803302   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:35 GMT
	I1206 19:17:35.803310   86706 round_trippers.go:580]     Audit-Id: 1205f119-4386-4f97-8c39-95bc28c15d92
	I1206 19:17:35.803315   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:35.803321   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:35.803326   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:35.803331   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:35.803358   86706 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-593099-m03","kind":"nodes","uid":"a37befac-9ea6-49a7-a8c3-a9b16981befa"}}
	I1206 19:17:35.803395   86706 node.go:124] successfully deleted node "m03"
	I1206 19:17:35.803410   86706 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
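The drain and node deletion performed above are equivalent to running the following against the control plane (node name as in this run):

	kubectl drain multinode-593099-m03 --ignore-daemonsets --delete-emptydir-data --force
	kubectl delete node multinode-593099-m03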
	I1206 19:17:35.803430   86706 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1206 19:17:35.803455   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pljvoo.606fh4wo921gby7i --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-593099-m03"
	I1206 19:17:35.918982   86706 command_runner.go:130] ! W1206 19:17:35.911404    2378 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1206 19:17:35.919014   86706 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1206 19:17:36.146434   86706 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1206 19:17:36.146497   86706 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1206 19:17:36.953611   86706 command_runner.go:130] > [preflight] Running pre-flight checks
	I1206 19:17:36.953634   86706 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1206 19:17:36.953644   86706 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1206 19:17:36.953653   86706 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 19:17:36.953665   86706 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 19:17:36.953674   86706 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1206 19:17:36.953684   86706 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1206 19:17:36.953697   86706 command_runner.go:130] > This node has joined the cluster:
	I1206 19:17:36.953708   86706 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1206 19:17:36.953714   86706 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1206 19:17:36.953721   86706 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1206 19:17:36.953912   86706 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pljvoo.606fh4wo921gby7i --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-593099-m03": (1.150421756s)
	I1206 19:17:36.953942   86706 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1206 19:17:37.232798   86706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=multinode-593099 minikube.k8s.io/updated_at=2023_12_06T19_17_37_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:17:37.333296   86706 command_runner.go:130] > node/multinode-593099-m02 labeled
	I1206 19:17:37.342137   86706 command_runner.go:130] > node/multinode-593099-m03 labeled
	I1206 19:17:37.344522   86706 start.go:306] JoinCluster complete in 4.993488729s
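The minikube.k8s.io labels applied above can be verified afterwards with, for example:

	kubectl get nodes -L minikube.k8s.io/primary -L minikube.k8s.io/version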
	I1206 19:17:37.344544   86706 cni.go:84] Creating CNI manager for ""
	I1206 19:17:37.344551   86706 cni.go:136] 3 nodes found, recommending kindnet
	I1206 19:17:37.344597   86706 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 19:17:37.349993   86706 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1206 19:17:37.350023   86706 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1206 19:17:37.350029   86706 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1206 19:17:37.350036   86706 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 19:17:37.350042   86706 command_runner.go:130] > Access: 2023-12-06 19:13:22.670512873 +0000
	I1206 19:17:37.350047   86706 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1206 19:17:37.350053   86706 command_runner.go:130] > Change: 2023-12-06 19:13:20.668512873 +0000
	I1206 19:17:37.350063   86706 command_runner.go:130] >  Birth: -
	I1206 19:17:37.350207   86706 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 19:17:37.350227   86706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 19:17:37.369505   86706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 19:17:37.704901   86706 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:17:37.709083   86706 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1206 19:17:37.712539   86706 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1206 19:17:37.726464   86706 command_runner.go:130] > daemonset.apps/kindnet configured
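Once the kindnet manifest has been applied as above, the DaemonSet rollout can be followed with a standard command such as:

	kubectl -n kube-system rollout status daemonset/kindnet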
	I1206 19:17:37.731739   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:17:37.731998   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:17:37.732378   86706 round_trippers.go:463] GET https://192.168.39.125:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 19:17:37.732394   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.732407   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.732415   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.735009   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:37.735026   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.735032   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.735037   86706 round_trippers.go:580]     Audit-Id: 3bca06fd-dc5a-4ddd-b315-bf9851c7297d
	I1206 19:17:37.735042   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.735047   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.735052   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.735057   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.735063   86706 round_trippers.go:580]     Content-Length: 291
	I1206 19:17:37.735161   86706 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"914591c0-c4d9-4bf1-b4d5-c7cbc3153364","resourceVersion":"841","creationTimestamp":"2023-12-06T19:03:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1206 19:17:37.735247   86706 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-593099" context rescaled to 1 replicas
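The rescale performed here through the Deployment's scale subresource is equivalent to:

	kubectl -n kube-system scale deployment coredns --replicas=1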
	I1206 19:17:37.735274   86706 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.194 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1206 19:17:37.737310   86706 out.go:177] * Verifying Kubernetes components...
	I1206 19:17:37.738708   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:17:37.754093   86706 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:17:37.754343   86706 kapi.go:59] client config for multinode-593099: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/profiles/multinode-593099/client.key", CAFile:"/home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:17:37.754623   86706 node_ready.go:35] waiting up to 6m0s for node "multinode-593099-m03" to be "Ready" ...
	I1206 19:17:37.754701   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:17:37.754712   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.754723   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.754736   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.757471   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:37.757491   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.757500   86706 round_trippers.go:580]     Audit-Id: ba7cc795-851c-4664-bc93-f889990c334d
	I1206 19:17:37.757507   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.757514   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.757522   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.757529   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.757537   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.757999   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m03","uid":"8ba2bdfd-d110-43fd-b33e-bd8e5a71e7b5","resourceVersion":"1183","creationTimestamp":"2023-12-06T19:17:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_17_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1206 19:17:37.758268   86706 node_ready.go:49] node "multinode-593099-m03" has status "Ready":"True"
	I1206 19:17:37.758282   86706 node_ready.go:38] duration metric: took 3.642488ms waiting for node "multinode-593099-m03" to be "Ready" ...
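The node_ready wait above polls GET /api/v1/nodes/multinode-593099-m03 until the node reports a Ready condition of True. Below is a minimal client-go sketch of that kind of check; it is an illustration, not minikube's actual implementation, and the kubeconfig path is a placeholder assumption (the real run authenticates with the per-profile client.crt/client.key shown earlier).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the logged run uses the minikube profile's certs instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the NodeReady condition is True, or give up after 6 minutes
	// (the same overall timeout the wait above uses).
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-593099-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}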
	I1206 19:17:37.758293   86706 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:17:37.758349   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods
	I1206 19:17:37.758359   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.758366   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.758372   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.762174   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:17:37.762203   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.762213   86706 round_trippers.go:580]     Audit-Id: 1684d716-2930-4452-9240-8aa85013a7c4
	I1206 19:17:37.762221   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.762229   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.762236   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.762247   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.762258   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.763065   86706 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1187"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"828","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82071 chars]
	I1206 19:17:37.765437   86706 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.765511   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6rcq
	I1206 19:17:37.765518   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.765526   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.765532   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.767607   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:37.767630   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.767640   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.767647   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.767655   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.767663   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.767673   86706 round_trippers.go:580]     Audit-Id: ecc1ed20-1340-400a-aead-177c30502814
	I1206 19:17:37.767686   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.767948   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6rcq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"85247dde-4cee-482e-8f9b-a9e8f4e7172e","resourceVersion":"828","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4bc00ef-7482-4e80-b416-7475ddc04c5d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4bc00ef-7482-4e80-b416-7475ddc04c5d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1206 19:17:37.768333   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:37.768346   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.768353   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.768362   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.770544   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:37.770558   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.770567   86706 round_trippers.go:580]     Audit-Id: d855c3df-7fcf-412a-b571-c1ac2a786527
	I1206 19:17:37.770579   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.770588   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.770600   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.770610   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.770623   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.770890   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:17:37.771203   86706 pod_ready.go:92] pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:37.771216   86706 pod_ready.go:81] duration metric: took 5.758498ms waiting for pod "coredns-5dd5756b68-h6rcq" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.771231   86706 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.771290   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-593099
	I1206 19:17:37.771298   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.771307   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.771320   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.773160   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:17:37.773175   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.773183   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.773191   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.773200   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.773212   86706 round_trippers.go:580]     Audit-Id: 9a576050-e05f-41f0-9c01-ea36489caeb3
	I1206 19:17:37.773225   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.773245   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.773442   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-593099","namespace":"kube-system","uid":"17573829-76f1-4718-80d6-248db178e8d0","resourceVersion":"848","creationTimestamp":"2023-12-06T19:03:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.125:2379","kubernetes.io/config.hash":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.mirror":"9ce14df981100c86a2ade94d91a33196","kubernetes.io/config.seen":"2023-12-06T19:03:21.456077539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1206 19:17:37.773807   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:37.773820   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.773827   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.773833   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.775721   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:17:37.775738   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.775746   86706 round_trippers.go:580]     Audit-Id: d6fd3029-0041-4afd-a113-b45437a1444b
	I1206 19:17:37.775754   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.775762   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.775778   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.775787   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.775800   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.776111   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:17:37.776368   86706 pod_ready.go:92] pod "etcd-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:37.776382   86706 pod_ready.go:81] duration metric: took 5.139868ms waiting for pod "etcd-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.776397   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.776443   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-593099
	I1206 19:17:37.776458   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.776464   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.776471   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.778591   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:37.778606   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.778615   86706 round_trippers.go:580]     Audit-Id: a595ab95-47c0-4c37-a43b-813b5325fdc8
	I1206 19:17:37.778623   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.778631   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.778641   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.778651   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.778665   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.779223   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-593099","namespace":"kube-system","uid":"c32eea84-5395-4ffd-9fe4-51ae29b0861c","resourceVersion":"839","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.125:8443","kubernetes.io/config.hash":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.mirror":"6290493e5e32b3d1986ab88f381ba97f","kubernetes.io/config.seen":"2023-12-06T19:03:30.652197401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1206 19:17:37.779559   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:37.779570   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.779577   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.779583   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.781680   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:37.781695   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.781704   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.781712   86706 round_trippers.go:580]     Audit-Id: c16abe44-69d0-4d23-96a5-474e9b0e9df9
	I1206 19:17:37.781721   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.781731   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.781744   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.781753   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.782095   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:17:37.782381   86706 pod_ready.go:92] pod "kube-apiserver-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:37.782397   86706 pod_ready.go:81] duration metric: took 5.988165ms waiting for pod "kube-apiserver-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.782408   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.782455   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-593099
	I1206 19:17:37.782465   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.782475   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.782485   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.784945   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:37.784964   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.784971   86706 round_trippers.go:580]     Audit-Id: fc93b336-323e-4118-bb3b-a5e937c2277f
	I1206 19:17:37.784976   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.784981   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.784987   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.784992   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.784999   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.785169   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-593099","namespace":"kube-system","uid":"bd10545f-240d-418a-b4ca-a48c978a56c9","resourceVersion":"826","creationTimestamp":"2023-12-06T19:03:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.mirror":"e0f1a77aff616164d10d488d27b08307","kubernetes.io/config.seen":"2023-12-06T19:03:30.652198715Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1206 19:17:37.785544   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:37.785559   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.785570   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.785578   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.787406   86706 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 19:17:37.787422   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.787428   86706 round_trippers.go:580]     Audit-Id: 32abfa32-ad25-4bf1-b9c7-7380277520e3
	I1206 19:17:37.787434   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.787442   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.787451   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.787460   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.787469   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.787601   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:17:37.787904   86706 pod_ready.go:92] pod "kube-controller-manager-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:37.787920   86706 pod_ready.go:81] duration metric: took 5.504292ms waiting for pod "kube-controller-manager-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.787929   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:37.955385   86706 request.go:629] Waited for 167.370569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:17:37.955462   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ggxmb
	I1206 19:17:37.955473   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:37.955488   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:37.955500   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:37.959225   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:17:37.959250   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:37.959260   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:37.959267   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:37 GMT
	I1206 19:17:37.959273   86706 round_trippers.go:580]     Audit-Id: 0f875f27-e433-4a3d-a2da-db939a8f4ec7
	I1206 19:17:37.959280   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:37.959288   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:37.959296   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:37.959466   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ggxmb","generateName":"kube-proxy-","namespace":"kube-system","uid":"9967a10f-783d-4e8f-bb49-f609c7227307","resourceVersion":"1012","creationTimestamp":"2023-12-06T19:04:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:04:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5723 chars]
	I1206 19:17:38.155365   86706 request.go:629] Waited for 195.353177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:17:38.155443   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m02
	I1206 19:17:38.155449   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:38.155460   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:38.155471   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:38.158222   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:38.158241   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:38.158247   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:38.158258   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:38.158265   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:38.158271   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:38 GMT
	I1206 19:17:38.158278   86706 round_trippers.go:580]     Audit-Id: 99444dea-48fb-4e96-9ed7-d1ec76f0efaa
	I1206 19:17:38.158283   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:38.158712   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m02","uid":"6ea06f34-1ede-44f1-9662-8cba0265fa0f","resourceVersion":"1182","creationTimestamp":"2023-12-06T19:15:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_17_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:15:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3992 chars]
	I1206 19:17:38.158993   86706 pod_ready.go:92] pod "kube-proxy-ggxmb" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:38.159007   86706 pod_ready.go:81] duration metric: took 371.071385ms waiting for pod "kube-proxy-ggxmb" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:38.159020   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:38.355503   86706 request.go:629] Waited for 196.390766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:17:38.355565   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-thqkt
	I1206 19:17:38.355570   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:38.355578   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:38.355587   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:38.358952   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:17:38.358971   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:38.358978   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:38.358984   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:38 GMT
	I1206 19:17:38.358989   86706 round_trippers.go:580]     Audit-Id: 6178e584-2805-4655-a916-b9174cc5e676
	I1206 19:17:38.358994   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:38.358999   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:38.359004   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:38.359167   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-thqkt","generateName":"kube-proxy-","namespace":"kube-system","uid":"0012fda4-56e7-4054-ab90-1704569e66e8","resourceVersion":"809","creationTimestamp":"2023-12-06T19:03:43Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 19:17:38.554909   86706 request.go:629] Waited for 195.302765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:38.554978   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:38.554988   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:38.554997   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:38.555007   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:38.558383   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:17:38.558408   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:38.558416   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:38.558421   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:38.558426   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:38 GMT
	I1206 19:17:38.558431   86706 round_trippers.go:580]     Audit-Id: 46f3b4ba-29c6-4000-b101-4b69ec9e3061
	I1206 19:17:38.558437   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:38.558444   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:38.558560   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:17:38.558868   86706 pod_ready.go:92] pod "kube-proxy-thqkt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:38.558883   86706 pod_ready.go:81] duration metric: took 399.855477ms waiting for pod "kube-proxy-thqkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:38.558892   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:38.755313   86706 request.go:629] Waited for 196.347375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:17:38.755392   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tp2wm
	I1206 19:17:38.755397   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:38.755406   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:38.755415   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:38.758249   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:38.758273   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:38.758281   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:38.758286   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:38.758291   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:38 GMT
	I1206 19:17:38.758296   86706 round_trippers.go:580]     Audit-Id: b25e1195-fc37-4b80-b3c8-df838d1b8291
	I1206 19:17:38.758301   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:38.758307   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:38.758432   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tp2wm","generateName":"kube-proxy-","namespace":"kube-system","uid":"366b51c9-af8f-4bd5-8200-dc43c4a3017c","resourceVersion":"1197","creationTimestamp":"2023-12-06T19:05:15Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:05:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9bd0b244-d31b-4ce9-a395-f0d7b9ee08be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1206 19:17:38.955284   86706 request.go:629] Waited for 196.420206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:17:38.955381   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099-m03
	I1206 19:17:38.955388   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:38.955402   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:38.955415   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:38.958396   86706 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 19:17:38.958424   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:38.958431   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:38 GMT
	I1206 19:17:38.958437   86706 round_trippers.go:580]     Audit-Id: e9c61005-cbe3-4fdf-aedd-6e9167f7e8a4
	I1206 19:17:38.958442   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:38.958447   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:38.958453   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:38.958458   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:38.958952   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099-m03","uid":"8ba2bdfd-d110-43fd-b33e-bd8e5a71e7b5","resourceVersion":"1183","creationTimestamp":"2023-12-06T19:17:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T19_17_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1206 19:17:38.959238   86706 pod_ready.go:92] pod "kube-proxy-tp2wm" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:38.959252   86706 pod_ready.go:81] duration metric: took 400.355127ms waiting for pod "kube-proxy-tp2wm" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:38.959263   86706 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:39.155780   86706 request.go:629] Waited for 196.424994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:17:39.155856   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-593099
	I1206 19:17:39.155863   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:39.155875   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:39.155887   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:39.159202   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:17:39.159223   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:39.159230   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:39.159235   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:39.159240   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:39 GMT
	I1206 19:17:39.159245   86706 round_trippers.go:580]     Audit-Id: 18508c90-7a86-473b-ae10-fde2d31264ed
	I1206 19:17:39.159250   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:39.159255   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:39.159583   86706 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-593099","namespace":"kube-system","uid":"7ae8a659-33ba-4e2b-9211-8d84efe7e5a4","resourceVersion":"831","creationTimestamp":"2023-12-06T19:03:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.mirror":"c031365adbae2937d228cc911fbfd7d4","kubernetes.io/config.seen":"2023-12-06T19:03:21.456083881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T19:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1206 19:17:39.355382   86706 request.go:629] Waited for 195.380179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:39.355451   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes/multinode-593099
	I1206 19:17:39.355461   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:39.355475   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:39.355491   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:39.358979   86706 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 19:17:39.359010   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:39.359020   86706 round_trippers.go:580]     Audit-Id: 9191206b-7f41-42cf-a395-b68a496b7453
	I1206 19:17:39.359029   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:39.359037   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:39.359046   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:39.359054   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:39.359063   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:39 GMT
	I1206 19:17:39.359259   86706 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T19:03:27Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1206 19:17:39.359714   86706 pod_ready.go:92] pod "kube-scheduler-multinode-593099" in "kube-system" namespace has status "Ready":"True"
	I1206 19:17:39.359736   86706 pod_ready.go:81] duration metric: took 400.466427ms waiting for pod "kube-scheduler-multinode-593099" in "kube-system" namespace to be "Ready" ...
	I1206 19:17:39.359753   86706 pod_ready.go:38] duration metric: took 1.601449329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
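The pod_ready wait that just completed lists kube-system pods and then checks the Ready condition of each system-critical pod matching the labels named above. A rough client-go equivalent for a single label selector is sketched below, reusing the imports and clientset construction from the node-readiness sketch; it is illustrative only, not the minikube code itself.

// Assumes cs is the *kubernetes.Clientset built as in the node-readiness sketch.
func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	// One of the label selectors the wait loop above cycles through.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kube-proxy",
	})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}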
	I1206 19:17:39.359774   86706 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 19:17:39.359830   86706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:17:39.375448   86706 system_svc.go:56] duration metric: took 15.666754ms WaitForService to wait for kubelet.
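The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH inside the VM and treats a zero exit status as "running". A local, non-SSH sketch of the same idea using the standard systemctl invocation follows; the SSH and sudo plumbing from the log is omitted, so this is only an approximation of the check.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}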
	I1206 19:17:39.375474   86706 kubeadm.go:581] duration metric: took 1.640157579s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 19:17:39.375506   86706 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:17:39.554887   86706 request.go:629] Waited for 179.287794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.125:8443/api/v1/nodes
	I1206 19:17:39.554946   86706 round_trippers.go:463] GET https://192.168.39.125:8443/api/v1/nodes
	I1206 19:17:39.554950   86706 round_trippers.go:469] Request Headers:
	I1206 19:17:39.554963   86706 round_trippers.go:473]     Accept: application/json, */*
	I1206 19:17:39.554978   86706 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 19:17:39.559133   86706 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 19:17:39.559214   86706 round_trippers.go:577] Response Headers:
	I1206 19:17:39.559231   86706 round_trippers.go:580]     Audit-Id: d7e4cb99-9e74-45db-95fa-fb7e18a47c91
	I1206 19:17:39.559241   86706 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 19:17:39.559254   86706 round_trippers.go:580]     Content-Type: application/json
	I1206 19:17:39.559285   86706 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 73849cc6-4783-481c-83f6-377a169322e8
	I1206 19:17:39.559298   86706 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8c19983d-8b4c-4ebf-accb-3ada4240f6df
	I1206 19:17:39.559308   86706 round_trippers.go:580]     Date: Wed, 06 Dec 2023 19:17:39 GMT
	I1206 19:17:39.559970   86706 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"multinode-593099","uid":"4d5b5b79-73b5-490c-8567-916215363236","resourceVersion":"857","creationTimestamp":"2023-12-06T19:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-593099","kubernetes.io/os":"linux","minikube.k8s.io/commit":"31a3600ce72029d920a55140bbc6d0705e357503","minikube.k8s.io/name":"multinode-593099","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T19_03_31_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16237 chars]
	I1206 19:17:39.560709   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:17:39.560731   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:17:39.560741   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:17:39.560748   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:17:39.560751   86706 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:17:39.560755   86706 node_conditions.go:123] node cpu capacity is 2
	I1206 19:17:39.560761   86706 node_conditions.go:105] duration metric: took 185.25018ms to run NodePressure ...
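The NodePressure step above lists all nodes and logs each node's ephemeral-storage and CPU capacity. The snippet below reads the same two figures from the node status, again reusing the clientset and imports from the node-readiness sketch; it is a sketch of the check, not the logged implementation.

// Assumes cs is the *kubernetes.Clientset from the node-readiness sketch.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// e.g. "multinode-593099: cpu=2 ephemeral-storage=17784752Ki"
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}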
	I1206 19:17:39.560773   86706 start.go:228] waiting for startup goroutines ...
	I1206 19:17:39.560854   86706 start.go:242] writing updated cluster config ...
	I1206 19:17:39.561179   86706 ssh_runner.go:195] Run: rm -f paused
	I1206 19:17:39.611379   86706 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 19:17:39.613990   86706 out.go:177] * Done! kubectl is now configured to use "multinode-593099" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:13:21 UTC, ends at Wed 2023-12-06 19:17:40 UTC. --
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.835031161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701890260835017342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=be841808-a99e-40a0-a94f-83433d0d4da3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.835724612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=32dfe904-cfa5-438f-aa51-c297433d34b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.835776401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=32dfe904-cfa5-438f-aa51-c297433d34b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.835966676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971a52aed093ee97daf8200843233b2154c757510cc19de110ba9144306f065f,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701890069759745330,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4a555f2932851e88bb0888c5eb0174bf0facc0fd42f0ca3957f1a773467f56,PodSandboxId:1d531a83e1e3a00d0b66c13c54ea6d59cd28f4df7d7a1d2c8b3eed5dcabbe439,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701890046993109623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38901bee62724f792b0a611a1d6a73b158a0c741ee455bfc99b2500b6d7a3a7f,PodSandboxId:5c2579097e33ea39d3d15ab060e221acd246754fb7e53063e955266448198a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701890045789237308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c300d5e77d967e3a21edb35af30c9156dbabf1e7b55a2bec92c866ec6f77ca,PodSandboxId:b895227b545fd638baaf1adc2ab5bbd93407669cbd4c6be5f9f7cd9d56e8e9b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701890041093727601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4061ed41067ead9272fc4dce4c7ccc3fd971c38df3c12f2d801d2723da56fa41,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701890039294928415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9140d654442db5c80850a653b06c0adc874f3292f581f5e2a60139455cab9654,PodSandboxId:e7dcd9ea3fbeec812bed260d385ed72f51d12d13cfac2dcd67dca380ec273608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701890039226143724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-1704569e
66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8b737c47dddde6ec1ee290b97efdfb35cac2e548e52424e1c461ede53fea0f,PodSandboxId:1f6f73c64d12e3efd238e88725584aa21d63fb339a88bfa866bb50953748faba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701890032171056422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d0de8d55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb5e4dad04660dddf08d1a55d990c4578fab59d77ed381aebd4d26801debafe,PodSandboxId:ac990dfec48f1d66f425ccfa65e8cefb37aa2bbd37f8b79a5de4a05310c0a8a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701890031981348149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa699613722652713926b5f6b13e591cabf82055259700a80a1f4051018156ca,PodSandboxId:da48678d46b5821d98848ca6a59c7036abe105d9655349a1fd39058d3e912b01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701890031815167871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de47f2ff329f73b3ba009a3cc3adacbeb0789b3a321ed17eecefc5e80a4b8c3d,PodSandboxId:61bd4319f2b06a238fc32f6d882077356a96adcf0864a5a145af6538c7c7f4de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701890031430250066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9422613e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=32dfe904-cfa5-438f-aa51-c297433d34b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.881846145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51631cf4-e5b6-48bb-8558-29cba6969578 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.881904855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51631cf4-e5b6-48bb-8558-29cba6969578 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.883097316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5e9ecb1b-091e-411f-8fe0-e7e0d66888a8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.883507625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701890260883493834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5e9ecb1b-091e-411f-8fe0-e7e0d66888a8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.884386674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aecf88bd-9998-4a5d-a4b4-3a78e130ff3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.884434281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aecf88bd-9998-4a5d-a4b4-3a78e130ff3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.884749755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971a52aed093ee97daf8200843233b2154c757510cc19de110ba9144306f065f,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701890069759745330,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4a555f2932851e88bb0888c5eb0174bf0facc0fd42f0ca3957f1a773467f56,PodSandboxId:1d531a83e1e3a00d0b66c13c54ea6d59cd28f4df7d7a1d2c8b3eed5dcabbe439,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701890046993109623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38901bee62724f792b0a611a1d6a73b158a0c741ee455bfc99b2500b6d7a3a7f,PodSandboxId:5c2579097e33ea39d3d15ab060e221acd246754fb7e53063e955266448198a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701890045789237308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c300d5e77d967e3a21edb35af30c9156dbabf1e7b55a2bec92c866ec6f77ca,PodSandboxId:b895227b545fd638baaf1adc2ab5bbd93407669cbd4c6be5f9f7cd9d56e8e9b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701890041093727601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4061ed41067ead9272fc4dce4c7ccc3fd971c38df3c12f2d801d2723da56fa41,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701890039294928415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9140d654442db5c80850a653b06c0adc874f3292f581f5e2a60139455cab9654,PodSandboxId:e7dcd9ea3fbeec812bed260d385ed72f51d12d13cfac2dcd67dca380ec273608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701890039226143724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-1704569e
66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8b737c47dddde6ec1ee290b97efdfb35cac2e548e52424e1c461ede53fea0f,PodSandboxId:1f6f73c64d12e3efd238e88725584aa21d63fb339a88bfa866bb50953748faba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701890032171056422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d0de8d55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb5e4dad04660dddf08d1a55d990c4578fab59d77ed381aebd4d26801debafe,PodSandboxId:ac990dfec48f1d66f425ccfa65e8cefb37aa2bbd37f8b79a5de4a05310c0a8a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701890031981348149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa699613722652713926b5f6b13e591cabf82055259700a80a1f4051018156ca,PodSandboxId:da48678d46b5821d98848ca6a59c7036abe105d9655349a1fd39058d3e912b01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701890031815167871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de47f2ff329f73b3ba009a3cc3adacbeb0789b3a321ed17eecefc5e80a4b8c3d,PodSandboxId:61bd4319f2b06a238fc32f6d882077356a96adcf0864a5a145af6538c7c7f4de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701890031430250066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9422613e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aecf88bd-9998-4a5d-a4b4-3a78e130ff3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.926414430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3273e7dc-1b84-4681-a22a-532e655e0de5 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.926475328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3273e7dc-1b84-4681-a22a-532e655e0de5 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.928223358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=23caa87b-a88b-4ecf-b363-6ddd6d63ef8a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.928691827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701890260928677591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=23caa87b-a88b-4ecf-b363-6ddd6d63ef8a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.929891490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5c59b613-bf3d-4045-88f2-0ed677568f9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.929942753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5c59b613-bf3d-4045-88f2-0ed677568f9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.930221276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971a52aed093ee97daf8200843233b2154c757510cc19de110ba9144306f065f,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701890069759745330,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4a555f2932851e88bb0888c5eb0174bf0facc0fd42f0ca3957f1a773467f56,PodSandboxId:1d531a83e1e3a00d0b66c13c54ea6d59cd28f4df7d7a1d2c8b3eed5dcabbe439,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701890046993109623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38901bee62724f792b0a611a1d6a73b158a0c741ee455bfc99b2500b6d7a3a7f,PodSandboxId:5c2579097e33ea39d3d15ab060e221acd246754fb7e53063e955266448198a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701890045789237308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c300d5e77d967e3a21edb35af30c9156dbabf1e7b55a2bec92c866ec6f77ca,PodSandboxId:b895227b545fd638baaf1adc2ab5bbd93407669cbd4c6be5f9f7cd9d56e8e9b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701890041093727601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4061ed41067ead9272fc4dce4c7ccc3fd971c38df3c12f2d801d2723da56fa41,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701890039294928415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9140d654442db5c80850a653b06c0adc874f3292f581f5e2a60139455cab9654,PodSandboxId:e7dcd9ea3fbeec812bed260d385ed72f51d12d13cfac2dcd67dca380ec273608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701890039226143724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-1704569e
66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8b737c47dddde6ec1ee290b97efdfb35cac2e548e52424e1c461ede53fea0f,PodSandboxId:1f6f73c64d12e3efd238e88725584aa21d63fb339a88bfa866bb50953748faba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701890032171056422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d0de8d55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb5e4dad04660dddf08d1a55d990c4578fab59d77ed381aebd4d26801debafe,PodSandboxId:ac990dfec48f1d66f425ccfa65e8cefb37aa2bbd37f8b79a5de4a05310c0a8a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701890031981348149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa699613722652713926b5f6b13e591cabf82055259700a80a1f4051018156ca,PodSandboxId:da48678d46b5821d98848ca6a59c7036abe105d9655349a1fd39058d3e912b01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701890031815167871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de47f2ff329f73b3ba009a3cc3adacbeb0789b3a321ed17eecefc5e80a4b8c3d,PodSandboxId:61bd4319f2b06a238fc32f6d882077356a96adcf0864a5a145af6538c7c7f4de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701890031430250066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9422613e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5c59b613-bf3d-4045-88f2-0ed677568f9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.976024441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f42c6e12-17c9-4796-a58e-1ddf75719005 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.976083430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f42c6e12-17c9-4796-a58e-1ddf75719005 name=/runtime.v1.RuntimeService/Version
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.977324022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c8739e32-f069-4b70-bf1e-f857ca084d3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.977783602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701890260977768565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c8739e32-f069-4b70-bf1e-f857ca084d3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.978478529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cb0c0cbb-c9c2-4c01-8113-94c8c92dfcae name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.978587058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cb0c0cbb-c9c2-4c01-8113-94c8c92dfcae name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 19:17:40 multinode-593099 crio[715]: time="2023-12-06 19:17:40.978785549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971a52aed093ee97daf8200843233b2154c757510cc19de110ba9144306f065f,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701890069759745330,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4a555f2932851e88bb0888c5eb0174bf0facc0fd42f0ca3957f1a773467f56,PodSandboxId:1d531a83e1e3a00d0b66c13c54ea6d59cd28f4df7d7a1d2c8b3eed5dcabbe439,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701890046993109623,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-x24l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c96072-6364-4b62-9a74-2aa19b4a2e69,},Annotations:map[string]string{io.kubernetes.container.hash: 34ab53fc,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38901bee62724f792b0a611a1d6a73b158a0c741ee455bfc99b2500b6d7a3a7f,PodSandboxId:5c2579097e33ea39d3d15ab060e221acd246754fb7e53063e955266448198a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701890045789237308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6rcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85247dde-4cee-482e-8f9b-a9e8f4e7172e,},Annotations:map[string]string{io.kubernetes.container.hash: fcfaa392,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2c300d5e77d967e3a21edb35af30c9156dbabf1e7b55a2bec92c866ec6f77ca,PodSandboxId:b895227b545fd638baaf1adc2ab5bbd93407669cbd4c6be5f9f7cd9d56e8e9b5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701890041093727601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x2r64,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1dafec99-c18b-40ca-8b9d-b5d520390c8c,},Annotations:map[string]string{io.kubernetes.container.hash: de221942,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4061ed41067ead9272fc4dce4c7ccc3fd971c38df3c12f2d801d2723da56fa41,PodSandboxId:af15355116785b7744b7f4336e3cc3e926a2ec1a32d514c4f6c0542e9db6edfb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701890039294928415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 35974b37-5aff-4940-8e2d-5fec9d1e2166,},Annotations:map[string]string{io.kubernetes.container.hash: 66b0258c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9140d654442db5c80850a653b06c0adc874f3292f581f5e2a60139455cab9654,PodSandboxId:e7dcd9ea3fbeec812bed260d385ed72f51d12d13cfac2dcd67dca380ec273608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701890039226143724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012fda4-56e7-4054-ab90-1704569e
66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 69ba80c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8b737c47dddde6ec1ee290b97efdfb35cac2e548e52424e1c461ede53fea0f,PodSandboxId:1f6f73c64d12e3efd238e88725584aa21d63fb339a88bfa866bb50953748faba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701890032171056422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce14df981100c86a2ade94d91a33196,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d0de8d55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb5e4dad04660dddf08d1a55d990c4578fab59d77ed381aebd4d26801debafe,PodSandboxId:ac990dfec48f1d66f425ccfa65e8cefb37aa2bbd37f8b79a5de4a05310c0a8a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701890031981348149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c031365adbae2937d228cc911fbfd7d4,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa699613722652713926b5f6b13e591cabf82055259700a80a1f4051018156ca,PodSandboxId:da48678d46b5821d98848ca6a59c7036abe105d9655349a1fd39058d3e912b01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701890031815167871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f1a77aff616164d10d488d27b08307,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de47f2ff329f73b3ba009a3cc3adacbeb0789b3a321ed17eecefc5e80a4b8c3d,PodSandboxId:61bd4319f2b06a238fc32f6d882077356a96adcf0864a5a145af6538c7c7f4de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701890031430250066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-593099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6290493e5e32b3d1986ab88f381ba97f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9422613e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cb0c0cbb-c9c2-4c01-8113-94c8c92dfcae name=/runtime.v1.RuntimeService/ListContainers
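	The Version, ImageFsInfo and ListContainers requests above are the kubelet's periodic CRI polling of CRI-O; the payloads repeat because the container set did not change between polls. For reference, roughly the same queries can be issued by hand on the node with crictl (a sketch, not something the test ran; it assumes crictl is installed on the minikube node and uses the CRI-O socket path shown in the node annotations below):

	  # query the same CRI endpoints that produced the log entries above
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo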
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	971a52aed093e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   af15355116785       storage-provisioner
	ff4a555f29328       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   1d531a83e1e3a       busybox-5bc68d56bd-x24l4
	38901bee62724       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   5c2579097e33e       coredns-5dd5756b68-h6rcq
	e2c300d5e77d9       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   b895227b545fd       kindnet-x2r64
	4061ed41067ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   af15355116785       storage-provisioner
	9140d654442db       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   e7dcd9ea3fbee       kube-proxy-thqkt
	8c8b737c47ddd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   1f6f73c64d12e       etcd-multinode-593099
	acb5e4dad0466       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   ac990dfec48f1       kube-scheduler-multinode-593099
	fa69961372265       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   da48678d46b58       kube-controller-manager-multinode-593099
	de47f2ff329f7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   61bd4319f2b06       kube-apiserver-multinode-593099
	
	* 
	* ==> coredns [38901bee62724f792b0a611a1d6a73b158a0c741ee455bfc99b2500b6d7a3a7f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55543 - 31697 "HINFO IN 8771735587459902539.8542103147593438616. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029532805s
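	The single NXDOMAIN entry is CoreDNS's own loop-detection self-check (a random HINFO probe), not a failed test query. To confirm in-cluster DNS is answering, a lookup could be run from the busybox pod listed in the container status above (a sketch, assuming the kubectl context and pod name from this run are still valid):

	  kubectl --context multinode-593099 exec busybox-5bc68d56bd-x24l4 -- nslookup kubernetes.default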
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-593099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-593099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=multinode-593099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T19_03_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:03:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-593099
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 19:17:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 19:14:27 +0000   Wed, 06 Dec 2023 19:03:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 19:14:27 +0000   Wed, 06 Dec 2023 19:03:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 19:14:27 +0000   Wed, 06 Dec 2023 19:03:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 19:14:27 +0000   Wed, 06 Dec 2023 19:14:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    multinode-593099
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c9748df5a624dfd9135ae5ea21210d0
	  System UUID:                9c9748df-5a62-4dfd-9135-ae5ea21210d0
	  Boot ID:                    d1dac90d-9533-48e9-bcfb-f0a31abd1677
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x24l4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-h6rcq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-593099                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-x2r64                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-593099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-593099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-thqkt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-593099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-593099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-593099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-593099 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-593099 event: Registered Node multinode-593099 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-593099 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-593099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-593099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-593099 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-593099 event: Registered Node multinode-593099 in Controller
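	The Ready condition above only transitions back to True at 19:14:02, i.e. after the kubelet restart recorded in the events. When scanning these dumps, the condition can be extracted directly instead of reading the whole describe block (a sketch, assuming the same kubectl context):

	  kubectl --context multinode-593099 get node multinode-593099 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'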
	
	
	Name:               multinode-593099-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-593099-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=multinode-593099
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_06T19_17_37_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:15:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-593099-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 19:17:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 19:15:51 +0000   Wed, 06 Dec 2023 19:15:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 19:15:51 +0000   Wed, 06 Dec 2023 19:15:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 19:15:51 +0000   Wed, 06 Dec 2023 19:15:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 19:15:51 +0000   Wed, 06 Dec 2023 19:15:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    multinode-593099-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 859ea87d65b84b6c993011d17f29b172
	  System UUID:                859ea87d-65b8-4b6c-9930-11d17f29b172
	  Boot ID:                    d3d11353-8920-4f9a-adca-538cbebb3918
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-h9jdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-2s5b8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-ggxmb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 107s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-593099-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-593099-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-593099-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-593099-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m54s                  kubelet     Node multinode-593099-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m16s (x2 over 3m16s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       112s                   kubelet     Node multinode-593099-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 110s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  110s (x2 over 110s)    kubelet     Node multinode-593099-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    110s (x2 over 110s)    kubelet     Node multinode-593099-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     110s (x2 over 110s)    kubelet     Node multinode-593099-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  110s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                110s                   kubelet     Node multinode-593099-m02 status is now: NodeReady
	
	
	Name:               multinode-593099-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-593099-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=multinode-593099
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_06T19_17_37_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:17:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-593099-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 19:17:36 +0000   Wed, 06 Dec 2023 19:17:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 19:17:36 +0000   Wed, 06 Dec 2023 19:17:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 19:17:36 +0000   Wed, 06 Dec 2023 19:17:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 19:17:36 +0000   Wed, 06 Dec 2023 19:17:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    multinode-593099-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 abcc59a8f9f54cb986eec4c0f44a75a7
	  System UUID:                abcc59a8-f9f5-4cb9-86ee-c4c0f44a75a7
	  Boot ID:                    0add3fd9-2a37-48e4-b1b0-d1d9c86eb1f4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5d8qw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kindnet-mbkkj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-tp2wm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-593099-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-593099-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-593099-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-593099-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                11m                 kubelet     Node multinode-593099-m03 status is now: NodeReady
	  Normal   NodeNotReady             72s                 kubelet     Node multinode-593099-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        43s (x2 over 103s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientMemory  6s (x4 over 11m)    kubelet     Node multinode-593099-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6s (x4 over 11m)    kubelet     Node multinode-593099-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6s (x4 over 11m)    kubelet     Node multinode-593099-m03 status is now: NodeHasSufficientPID
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-593099-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-593099-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-593099-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-593099-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067484] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.407386] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.443807] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154803] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.448287] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.420060] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.113992] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.136266] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.103552] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.202475] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +16.609837] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [8c8b737c47dddde6ec1ee290b97efdfb35cac2e548e52424e1c461ede53fea0f] <==
	* {"level":"info","ts":"2023-12-06T19:13:53.875771Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-06T19:13:53.875852Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-06T19:13:53.874501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c switched to configuration voters=(17641705551115235980)"}
	{"level":"info","ts":"2023-12-06T19:13:53.876125Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","added-peer-id":"f4d3edba9e42b28c","added-peer-peer-urls":["https://192.168.39.125:2380"]}
	{"level":"info","ts":"2023-12-06T19:13:53.876707Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:13:53.876796Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:13:53.882074Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-06T19:13:53.882295Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-06T19:13:53.882344Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-06T19:13:53.882418Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2023-12-06T19:13:53.882441Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2023-12-06T19:13:55.124649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-06T19:13:55.124687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-06T19:13:55.1247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2023-12-06T19:13:55.124716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 3"}
	{"level":"info","ts":"2023-12-06T19:13:55.124721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2023-12-06T19:13:55.124729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 3"}
	{"level":"info","ts":"2023-12-06T19:13:55.124736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2023-12-06T19:13:55.126498Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:multinode-593099 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T19:13:55.126721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:13:55.126859Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:13:55.127808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2023-12-06T19:13:55.127924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T19:13:55.133628Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T19:13:55.133703Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:17:41 up 4 min,  0 users,  load average: 0.29, 0.32, 0.15
	Linux multinode-593099 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [e2c300d5e77d967e3a21edb35af30c9156dbabf1e7b55a2bec92c866ec6f77ca] <==
	* I1206 19:16:52.638099       1 main.go:250] Node multinode-593099-m03 has CIDR [10.244.3.0/24] 
	I1206 19:17:02.651972       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:17:02.728488       1 main.go:227] handling current node
	I1206 19:17:02.728601       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I1206 19:17:02.728618       1 main.go:250] Node multinode-593099-m02 has CIDR [10.244.1.0/24] 
	I1206 19:17:02.728795       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I1206 19:17:02.728841       1 main.go:250] Node multinode-593099-m03 has CIDR [10.244.3.0/24] 
	I1206 19:17:12.740864       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:17:12.740921       1 main.go:227] handling current node
	I1206 19:17:12.740935       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I1206 19:17:12.740940       1 main.go:250] Node multinode-593099-m02 has CIDR [10.244.1.0/24] 
	I1206 19:17:12.741087       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I1206 19:17:12.741120       1 main.go:250] Node multinode-593099-m03 has CIDR [10.244.3.0/24] 
	I1206 19:17:22.746616       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:17:22.746749       1 main.go:227] handling current node
	I1206 19:17:22.746777       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I1206 19:17:22.746797       1 main.go:250] Node multinode-593099-m02 has CIDR [10.244.1.0/24] 
	I1206 19:17:22.746936       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I1206 19:17:22.746958       1 main.go:250] Node multinode-593099-m03 has CIDR [10.244.3.0/24] 
	I1206 19:17:32.761732       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I1206 19:17:32.762177       1 main.go:227] handling current node
	I1206 19:17:32.762209       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I1206 19:17:32.762232       1 main.go:250] Node multinode-593099-m02 has CIDR [10.244.1.0/24] 
	I1206 19:17:32.762358       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I1206 19:17:32.762380       1 main.go:250] Node multinode-593099-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [de47f2ff329f73b3ba009a3cc3adacbeb0789b3a321ed17eecefc5e80a4b8c3d] <==
	* I1206 19:13:56.438980       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1206 19:13:56.438996       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1206 19:13:56.496816       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1206 19:13:56.496856       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1206 19:13:56.496865       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1206 19:13:56.506644       1 aggregator.go:166] initial CRD sync complete...
	I1206 19:13:56.506681       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 19:13:56.506688       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 19:13:56.506694       1 cache.go:39] Caches are synced for autoregister controller
	I1206 19:13:56.522888       1 shared_informer.go:318] Caches are synced for configmaps
	I1206 19:13:56.535733       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1206 19:13:56.538768       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 19:13:56.589850       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 19:13:56.638091       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 19:13:56.640237       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1206 19:13:56.640298       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1206 19:13:56.651382       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1206 19:13:56.658449       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 19:13:57.442361       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 19:13:59.010048       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1206 19:13:59.254761       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 19:13:59.269812       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 19:13:59.405626       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 19:13:59.419488       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 19:14:47.294486       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [fa699613722652713926b5f6b13e591cabf82055259700a80a1f4051018156ca] <==
	* I1206 19:15:51.794369       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-593099-m03"
	I1206 19:15:51.794849       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-shdgj" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-shdgj"
	I1206 19:15:51.818733       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-593099-m02" podCIDRs=["10.244.1.0/24"]
	I1206 19:15:51.944880       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-593099-m02"
	I1206 19:15:52.696754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.807µs"
	I1206 19:16:05.989047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="126.38µs"
	I1206 19:16:06.564976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="137.762µs"
	I1206 19:16:06.575989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.848µs"
	I1206 19:16:29.142200       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-593099-m02"
	I1206 19:17:32.783791       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-h9jdf"
	I1206 19:17:32.797089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.434614ms"
	I1206 19:17:32.819062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.851212ms"
	I1206 19:17:32.819218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.612µs"
	I1206 19:17:32.819268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.236µs"
	I1206 19:17:32.832160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="111.902µs"
	I1206 19:17:33.849973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.729949ms"
	I1206 19:17:33.850498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="125.374µs"
	I1206 19:17:35.775160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.624µs"
	I1206 19:17:35.798246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-593099-m02"
	I1206 19:17:36.647220       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-593099-m03\" does not exist"
	I1206 19:17:36.648747       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-593099-m02"
	I1206 19:17:36.649270       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-5d8qw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-5d8qw"
	I1206 19:17:36.665025       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-593099-m03" podCIDRs=["10.244.2.0/24"]
	I1206 19:17:36.688077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-593099-m02"
	I1206 19:17:37.528094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.22µs"
	
	* 
	* ==> kube-proxy [9140d654442db5c80850a653b06c0adc874f3292f581f5e2a60139455cab9654] <==
	* I1206 19:13:59.669043       1 server_others.go:69] "Using iptables proxy"
	I1206 19:13:59.680807       1 node.go:141] Successfully retrieved node IP: 192.168.39.125
	I1206 19:13:59.729292       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1206 19:13:59.729366       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 19:13:59.737451       1 server_others.go:152] "Using iptables Proxier"
	I1206 19:13:59.737653       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 19:13:59.737829       1 server.go:846] "Version info" version="v1.28.4"
	I1206 19:13:59.738068       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:13:59.739994       1 config.go:188] "Starting service config controller"
	I1206 19:13:59.740049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 19:13:59.740082       1 config.go:97] "Starting endpoint slice config controller"
	I1206 19:13:59.740098       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 19:13:59.740926       1 config.go:315] "Starting node config controller"
	I1206 19:13:59.740966       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 19:13:59.841243       1 shared_informer.go:318] Caches are synced for node config
	I1206 19:13:59.841331       1 shared_informer.go:318] Caches are synced for service config
	I1206 19:13:59.841382       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [acb5e4dad04660dddf08d1a55d990c4578fab59d77ed381aebd4d26801debafe] <==
	* I1206 19:13:54.058230       1 serving.go:348] Generated self-signed cert in-memory
	W1206 19:13:56.540137       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 19:13:56.540214       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 19:13:56.540243       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 19:13:56.540268       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 19:13:56.574187       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1206 19:13:56.574268       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:13:56.578078       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 19:13:56.578201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 19:13:56.578483       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1206 19:13:56.578618       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1206 19:13:56.679449       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:13:21 UTC, ends at Wed 2023-12-06 19:17:41 UTC. --
	Dec 06 19:13:59 multinode-593099 kubelet[919]: E1206 19:13:59.160407     919 projected.go:198] Error preparing data for projected volume kube-api-access-pk2vp for pod default/busybox-5bc68d56bd-x24l4: object "default"/"kube-root-ca.crt" not registered
	Dec 06 19:13:59 multinode-593099 kubelet[919]: E1206 19:13:59.160451     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2c96072-6364-4b62-9a74-2aa19b4a2e69-kube-api-access-pk2vp podName:b2c96072-6364-4b62-9a74-2aa19b4a2e69 nodeName:}" failed. No retries permitted until 2023-12-06 19:14:01.160438715 +0000 UTC m=+10.890433241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pk2vp" (UniqueName: "kubernetes.io/projected/b2c96072-6364-4b62-9a74-2aa19b4a2e69-kube-api-access-pk2vp") pod "busybox-5bc68d56bd-x24l4" (UID: "b2c96072-6364-4b62-9a74-2aa19b4a2e69") : object "default"/"kube-root-ca.crt" not registered
	Dec 06 19:13:59 multinode-593099 kubelet[919]: E1206 19:13:59.544361     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-x24l4" podUID="b2c96072-6364-4b62-9a74-2aa19b4a2e69"
	Dec 06 19:14:00 multinode-593099 kubelet[919]: E1206 19:14:00.544112     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-h6rcq" podUID="85247dde-4cee-482e-8f9b-a9e8f4e7172e"
	Dec 06 19:14:01 multinode-593099 kubelet[919]: E1206 19:14:01.074356     919 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 06 19:14:01 multinode-593099 kubelet[919]: E1206 19:14:01.074484     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85247dde-4cee-482e-8f9b-a9e8f4e7172e-config-volume podName:85247dde-4cee-482e-8f9b-a9e8f4e7172e nodeName:}" failed. No retries permitted until 2023-12-06 19:14:05.074468522 +0000 UTC m=+14.804463054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/85247dde-4cee-482e-8f9b-a9e8f4e7172e-config-volume") pod "coredns-5dd5756b68-h6rcq" (UID: "85247dde-4cee-482e-8f9b-a9e8f4e7172e") : object "kube-system"/"coredns" not registered
	Dec 06 19:14:01 multinode-593099 kubelet[919]: E1206 19:14:01.175000     919 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 06 19:14:01 multinode-593099 kubelet[919]: E1206 19:14:01.175058     919 projected.go:198] Error preparing data for projected volume kube-api-access-pk2vp for pod default/busybox-5bc68d56bd-x24l4: object "default"/"kube-root-ca.crt" not registered
	Dec 06 19:14:01 multinode-593099 kubelet[919]: E1206 19:14:01.175108     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2c96072-6364-4b62-9a74-2aa19b4a2e69-kube-api-access-pk2vp podName:b2c96072-6364-4b62-9a74-2aa19b4a2e69 nodeName:}" failed. No retries permitted until 2023-12-06 19:14:05.175095341 +0000 UTC m=+14.905089868 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pk2vp" (UniqueName: "kubernetes.io/projected/b2c96072-6364-4b62-9a74-2aa19b4a2e69-kube-api-access-pk2vp") pod "busybox-5bc68d56bd-x24l4" (UID: "b2c96072-6364-4b62-9a74-2aa19b4a2e69") : object "default"/"kube-root-ca.crt" not registered
	Dec 06 19:14:01 multinode-593099 kubelet[919]: E1206 19:14:01.544644     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-x24l4" podUID="b2c96072-6364-4b62-9a74-2aa19b4a2e69"
	Dec 06 19:14:02 multinode-593099 kubelet[919]: E1206 19:14:02.543707     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-h6rcq" podUID="85247dde-4cee-482e-8f9b-a9e8f4e7172e"
	Dec 06 19:14:02 multinode-593099 kubelet[919]: I1206 19:14:02.819958     919 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 06 19:14:29 multinode-593099 kubelet[919]: I1206 19:14:29.729933     919 scope.go:117] "RemoveContainer" containerID="4061ed41067ead9272fc4dce4c7ccc3fd971c38df3c12f2d801d2723da56fa41"
	Dec 06 19:14:50 multinode-593099 kubelet[919]: E1206 19:14:50.566861     919 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 19:14:50 multinode-593099 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 19:14:50 multinode-593099 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 19:14:50 multinode-593099 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 19:15:50 multinode-593099 kubelet[919]: E1206 19:15:50.559870     919 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 19:15:50 multinode-593099 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 19:15:50 multinode-593099 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 19:15:50 multinode-593099 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 19:16:50 multinode-593099 kubelet[919]: E1206 19:16:50.565402     919 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 19:16:50 multinode-593099 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 19:16:50 multinode-593099 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 19:16:50 multinode-593099 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-593099 -n multinode-593099
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-593099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (693.02s)
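The node descriptions and component logs above are the standard post-mortem that the test helpers collect after a failure. For reference, a minimal way to gather the same information by hand against this profile (assuming the cluster from this run is still reachable; the profile name is taken verbatim from the output above) is:

	kubectl --context multinode-593099 describe nodes
	out/minikube-linux-amd64 -p multinode-593099 logs --file=logs.txt
	out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-593099 -n multinode-593099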

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 stop
E1206 19:17:54.632524   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:18:22.657330   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-593099 stop: exit status 82 (2m1.755699051s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-593099"  ...
	* Stopping node "multinode-593099"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-593099 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-593099 status: exit status 3 (18.743898389s)

                                                
                                                
-- stdout --
	multinode-593099
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-593099-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:20:04.845551   89528 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E1206 19:20:04.845591   89528 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-593099 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-593099 -n multinode-593099
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-593099 -n multinode-593099: exit status 3 (3.193545409s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:20:08.205563   89634 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E1206 19:20:08.205585   89634 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-593099" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.69s)
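The stop above exits with status 82 (GUEST_STOP_TIMEOUT), after which the control-plane VM no longer answers on SSH, so the follow-up status checks fail with exit status 3. A sketch of how the diagnostics suggested in the error box could be collected on the same host, assuming the profile is still present and the VM is reachable enough for log retrieval (both paths are taken verbatim from the output above):

	out/minikube-linux-amd64 -p multinode-593099 logs --file=logs.txt
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log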

                                                
                                    
x
+
TestPreload (274.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-728164 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-728164 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m13.127480156s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-728164 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-728164 image pull gcr.io/k8s-minikube/busybox: (1.045051475s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-728164
E1206 19:30:51.525253   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:30:57.680181   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-728164: exit status 82 (2m0.837533191s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-728164"  ...
	* Stopping node "test-preload-728164"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-728164 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-12-06 19:32:40.584209653 +0000 UTC m=+3134.668703475
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-728164 -n test-preload-728164
E1206 19:32:54.632943   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-728164 -n test-preload-728164: exit status 3 (18.66710094s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:32:59.245667   92710 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.206:22: connect: no route to host
	E1206 19:32:59.245729   92710 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.206:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-728164" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-728164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-728164
--- FAIL: TestPreload (274.57s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (139.64s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.4096142650.exe start -p running-upgrade-832296 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1206 19:35:51.526041   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.4096142650.exe start -p running-upgrade-832296 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m13.712350281s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-832296 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-832296 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (3.882313268s)

                                                
                                                
-- stdout --
	* [running-upgrade-832296] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-832296 in cluster running-upgrade-832296
	* Updating the running kvm2 "running-upgrade-832296" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:37:26.892948   97925 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:37:26.893119   97925 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:37:26.893137   97925 out.go:309] Setting ErrFile to fd 2...
	I1206 19:37:26.893145   97925 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:37:26.893401   97925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:37:26.894253   97925 out.go:303] Setting JSON to false
	I1206 19:37:26.895337   97925 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8397,"bootTime":1701883050,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:37:26.895404   97925 start.go:138] virtualization: kvm guest
	I1206 19:37:26.897484   97925 out.go:177] * [running-upgrade-832296] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:37:26.899314   97925 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:37:26.899392   97925 notify.go:220] Checking for updates...
	I1206 19:37:26.900783   97925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:37:26.902500   97925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:37:26.904013   97925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:37:26.906369   97925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:37:26.908020   97925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:37:26.909855   97925 config.go:182] Loaded profile config "running-upgrade-832296": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1206 19:37:26.909871   97925 start_flags.go:694] config upgrade: Driver=kvm2
	I1206 19:37:26.909880   97925 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1206 19:37:26.909934   97925 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/running-upgrade-832296/config.json ...
	I1206 19:37:26.910563   97925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:37:26.910633   97925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:37:26.925411   97925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37343
	I1206 19:37:26.925888   97925 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:37:26.926482   97925 main.go:141] libmachine: Using API Version  1
	I1206 19:37:26.926522   97925 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:37:26.926934   97925 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:37:26.927137   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:26.929394   97925 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1206 19:37:26.930722   97925 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:37:26.931136   97925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:37:26.931189   97925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:37:26.946467   97925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40921
	I1206 19:37:26.946901   97925 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:37:26.947354   97925 main.go:141] libmachine: Using API Version  1
	I1206 19:37:26.947388   97925 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:37:26.947769   97925 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:37:26.947944   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:26.987812   97925 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:37:26.989176   97925 start.go:298] selected driver: kvm2
	I1206 19:37:26.989188   97925 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-832296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.101 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1206 19:37:26.989338   97925 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:37:26.990091   97925 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:26.990182   97925 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:37:27.006148   97925 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:37:27.006588   97925 cni.go:84] Creating CNI manager for ""
	I1206 19:37:27.006614   97925 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1206 19:37:27.006631   97925 start_flags.go:323] config:
	{Name:running-upgrade-832296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.101 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1206 19:37:27.006919   97925 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.008803   97925 out.go:177] * Starting control plane node running-upgrade-832296 in cluster running-upgrade-832296
	I1206 19:37:27.010377   97925 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1206 19:37:27.048173   97925 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1206 19:37:27.048335   97925 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/running-upgrade-832296/config.json ...
	I1206 19:37:27.048430   97925 cache.go:107] acquiring lock: {Name:mk5195646e7dd0f79e637be96dc35fbc12e472e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048461   97925 cache.go:107] acquiring lock: {Name:mk6e4e65abd6f9df232608024e5f23f46678723c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048479   97925 cache.go:107] acquiring lock: {Name:mk2c69bb29434615460b788bdb15e044eb4f10b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048484   97925 cache.go:107] acquiring lock: {Name:mk604bb84a1cb9fbf2b55c38820113aec076231a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048537   97925 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 19:37:27.048547   97925 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.937µs
	I1206 19:37:27.048557   97925 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 19:37:27.048573   97925 cache.go:107] acquiring lock: {Name:mkf8f023ea46a22353d294cf75f7b0593d247c42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048600   97925 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1206 19:37:27.048599   97925 start.go:365] acquiring machines lock for running-upgrade-832296: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:37:27.048643   97925 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1206 19:37:27.048656   97925 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1206 19:37:27.048696   97925 start.go:369] acquired machines lock for "running-upgrade-832296" in 74.509µs
	I1206 19:37:27.048712   97925 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1206 19:37:27.048731   97925 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:37:27.048738   97925 fix.go:54] fixHost starting: minikube
	I1206 19:37:27.048438   97925 cache.go:107] acquiring lock: {Name:mkebcc4725073a16dd6ef66040e9e24922311f73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048871   97925 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1206 19:37:27.048873   97925 cache.go:107] acquiring lock: {Name:mkcffad656ee0d7e4d7e8c338c0442155a3917fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048925   97925 cache.go:107] acquiring lock: {Name:mka5144732d6393a384f4a32452dd1d86fb27d83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:37:27.048987   97925 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1206 19:37:27.049056   97925 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1206 19:37:27.049140   97925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:37:27.049169   97925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:37:27.050014   97925 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1206 19:37:27.050055   97925 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1206 19:37:27.050111   97925 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1206 19:37:27.050213   97925 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1206 19:37:27.050213   97925 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1206 19:37:27.050287   97925 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1206 19:37:27.050364   97925 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1206 19:37:27.068811   97925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I1206 19:37:27.069347   97925 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:37:27.069973   97925 main.go:141] libmachine: Using API Version  1
	I1206 19:37:27.069999   97925 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:37:27.070397   97925 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:37:27.070606   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:27.070801   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetState
	I1206 19:37:27.073071   97925 fix.go:102] recreateIfNeeded on running-upgrade-832296: state=Running err=<nil>
	W1206 19:37:27.073104   97925 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:37:27.173420   97925 out.go:177] * Updating the running kvm2 "running-upgrade-832296" VM ...
	I1206 19:37:27.201677   97925 machine.go:88] provisioning docker machine ...
	I1206 19:37:27.201717   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:27.202065   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetMachineName
	I1206 19:37:27.202302   97925 buildroot.go:166] provisioning hostname "running-upgrade-832296"
	I1206 19:37:27.202336   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetMachineName
	I1206 19:37:27.202520   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:27.205896   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.206435   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:27.206461   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.206615   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:27.206827   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:27.207000   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:27.207155   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:27.207331   97925 main.go:141] libmachine: Using SSH client type: native
	I1206 19:37:27.207704   97925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.101 22 <nil> <nil>}
	I1206 19:37:27.207727   97925 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-832296 && echo "running-upgrade-832296" | sudo tee /etc/hostname
	I1206 19:37:27.237141   97925 cache.go:162] opening:  /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1206 19:37:27.310967   97925 cache.go:157] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1206 19:37:27.310992   97925 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 262.105181ms
	I1206 19:37:27.311004   97925 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1206 19:37:27.344689   97925 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-832296
	
	I1206 19:37:27.344725   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:27.347772   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.348217   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:27.348265   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.348350   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:27.348577   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:27.348765   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:27.348964   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:27.349172   97925 main.go:141] libmachine: Using SSH client type: native
	I1206 19:37:27.349525   97925 cache.go:162] opening:  /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1206 19:37:27.349574   97925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.101 22 <nil> <nil>}
	I1206 19:37:27.349596   97925 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-832296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-832296/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-832296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:37:27.363271   97925 cache.go:162] opening:  /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1206 19:37:27.365073   97925 cache.go:162] opening:  /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1206 19:37:27.369976   97925 cache.go:162] opening:  /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1206 19:37:27.378962   97925 cache.go:162] opening:  /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1206 19:37:27.450545   97925 cache.go:162] opening:  /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1206 19:37:27.502671   97925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:37:27.502703   97925 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:37:27.502725   97925 buildroot.go:174] setting up certificates
	I1206 19:37:27.502799   97925 provision.go:83] configureAuth start
	I1206 19:37:27.502822   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetMachineName
	I1206 19:37:27.503515   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetIP
	I1206 19:37:27.508242   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.509029   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:27.509065   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.509378   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:27.513596   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.514038   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:27.514083   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.514152   97925 provision.go:138] copyHostCerts
	I1206 19:37:27.514197   97925 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:37:27.514210   97925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:37:27.514266   97925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:37:27.514379   97925 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:37:27.514386   97925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:37:27.514414   97925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:37:27.514501   97925 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:37:27.514507   97925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:37:27.514529   97925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:37:27.514604   97925 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-832296 san=[192.168.50.101 192.168.50.101 localhost 127.0.0.1 minikube running-upgrade-832296]
	I1206 19:37:27.789579   97925 provision.go:172] copyRemoteCerts
	I1206 19:37:27.789721   97925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:37:27.789781   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:27.793043   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.793376   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:27.793419   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.793668   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:27.793859   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:27.793970   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:27.794121   97925 sshutil.go:53] new ssh client: &{IP:192.168.50.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/running-upgrade-832296/id_rsa Username:docker}
	I1206 19:37:27.893116   97925 cache.go:157] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1206 19:37:27.893155   97925 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 844.585027ms
	I1206 19:37:27.893170   97925 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1206 19:37:27.896362   97925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:37:27.914261   97925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:37:27.932647   97925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 19:37:27.949278   97925 provision.go:86] duration metric: configureAuth took 446.456902ms
	I1206 19:37:27.949319   97925 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:37:27.949519   97925 config.go:182] Loaded profile config "running-upgrade-832296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1206 19:37:27.949602   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:27.952705   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.953139   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:27.953168   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:27.953414   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:27.953634   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:27.953887   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:27.954087   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:27.954306   97925 main.go:141] libmachine: Using SSH client type: native
	I1206 19:37:27.954799   97925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.101 22 <nil> <nil>}
	I1206 19:37:27.954825   97925 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:37:28.210807   97925 cache.go:157] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1206 19:37:28.210850   97925 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.162371141s
	I1206 19:37:28.210871   97925 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1206 19:37:28.335390   97925 cache.go:157] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1206 19:37:28.335450   97925 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.287005752s
	I1206 19:37:28.335471   97925 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1206 19:37:28.411565   97925 cache.go:157] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1206 19:37:28.411594   97925 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.362723742s
	I1206 19:37:28.411611   97925 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1206 19:37:28.553177   97925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:37:28.553204   97925 machine.go:91] provisioned docker machine in 1.351502339s
	I1206 19:37:28.553214   97925 start.go:300] post-start starting for "running-upgrade-832296" (driver="kvm2")
	I1206 19:37:28.553224   97925 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:37:28.553262   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:28.553675   97925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:37:28.553710   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:28.557441   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.557879   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:28.557910   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.558036   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:28.558313   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:28.558489   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:28.558984   97925 sshutil.go:53] new ssh client: &{IP:192.168.50.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/running-upgrade-832296/id_rsa Username:docker}
	I1206 19:37:28.644177   97925 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:37:28.648925   97925 info.go:137] Remote host: Buildroot 2019.02.7
	I1206 19:37:28.648949   97925 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:37:28.649010   97925 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:37:28.649086   97925 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:37:28.649196   97925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:37:28.655615   97925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:37:28.670445   97925 start.go:303] post-start completed in 117.216814ms
	I1206 19:37:28.670467   97925 fix.go:56] fixHost completed within 1.621729595s
	I1206 19:37:28.670488   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:28.673560   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.674059   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:28.674087   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.674251   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:28.674480   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:28.674685   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:28.674832   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:28.675011   97925 main.go:141] libmachine: Using SSH client type: native
	I1206 19:37:28.675352   97925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.101 22 <nil> <nil>}
	I1206 19:37:28.675367   97925 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 19:37:28.834929   97925 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701891448.830567636
	
	I1206 19:37:28.834959   97925 fix.go:206] guest clock: 1701891448.830567636
	I1206 19:37:28.834999   97925 fix.go:219] Guest: 2023-12-06 19:37:28.830567636 +0000 UTC Remote: 2023-12-06 19:37:28.670470233 +0000 UTC m=+1.834368690 (delta=160.097403ms)
	I1206 19:37:28.835047   97925 fix.go:190] guest clock delta is within tolerance: 160.097403ms
	I1206 19:37:28.835058   97925 start.go:83] releasing machines lock for "running-upgrade-832296", held for 1.786347379s
	I1206 19:37:28.835102   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:28.835399   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetIP
	I1206 19:37:28.838486   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.838982   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:28.839016   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.839159   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:28.839743   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:28.839919   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .DriverName
	I1206 19:37:28.840028   97925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:37:28.840076   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:28.840273   97925 ssh_runner.go:195] Run: cat /version.json
	I1206 19:37:28.840306   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHHostname
	I1206 19:37:28.843780   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.844060   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.844283   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:28.844318   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.844497   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:28.844500   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:e3:0b", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:35:46 +0000 UTC Type:0 Mac:52:54:00:c6:e3:0b Iaid: IPaddr:192.168.50.101 Prefix:24 Hostname:running-upgrade-832296 Clientid:01:52:54:00:c6:e3:0b}
	I1206 19:37:28.844538   97925 main.go:141] libmachine: (running-upgrade-832296) DBG | domain running-upgrade-832296 has defined IP address 192.168.50.101 and MAC address 52:54:00:c6:e3:0b in network minikube-net
	I1206 19:37:28.844705   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:28.844724   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHPort
	I1206 19:37:28.844946   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:28.844958   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHKeyPath
	I1206 19:37:28.845110   97925 sshutil.go:53] new ssh client: &{IP:192.168.50.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/running-upgrade-832296/id_rsa Username:docker}
	I1206 19:37:28.845284   97925 main.go:141] libmachine: (running-upgrade-832296) Calling .GetSSHUsername
	I1206 19:37:28.845417   97925 sshutil.go:53] new ssh client: &{IP:192.168.50.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/running-upgrade-832296/id_rsa Username:docker}
	W1206 19:37:28.959202   97925 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1206 19:37:28.998376   97925 cache.go:157] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1206 19:37:28.998404   97925 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.949924739s
	I1206 19:37:28.998419   97925 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1206 19:37:29.007525   97925 cache.go:157] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1206 19:37:29.007551   97925 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.959126165s
	I1206 19:37:29.007563   97925 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1206 19:37:29.007579   97925 cache.go:87] Successfully saved all images to host disk.
	I1206 19:37:29.007627   97925 ssh_runner.go:195] Run: systemctl --version
	I1206 19:37:29.013283   97925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:37:29.136122   97925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:37:29.145201   97925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:37:29.145295   97925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:37:29.153031   97925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 19:37:29.153057   97925 start.go:475] detecting cgroup driver to use...
	I1206 19:37:29.153130   97925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:37:29.167926   97925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:37:29.183660   97925 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:37:29.183730   97925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:37:29.204393   97925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:37:29.216366   97925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1206 19:37:29.230398   97925 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1206 19:37:29.230482   97925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:37:29.353774   97925 docker.go:219] disabling docker service ...
	I1206 19:37:29.353843   97925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:37:30.371841   97925 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.017959689s)
	I1206 19:37:30.371922   97925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:37:30.388288   97925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:37:30.539854   97925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:37:30.656786   97925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:37:30.667911   97925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:37:30.683627   97925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1206 19:37:30.683701   97925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:37:30.702971   97925 out.go:177] 
	W1206 19:37:30.704751   97925 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1206 19:37:30.704773   97925 out.go:239] * 
	* 
	W1206 19:37:30.705873   97925 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 19:37:30.707800   97925 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-832296 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
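Editor's note: the fatal step in the stderr above is the CRI-O pause-image rewrite. The new binary runs sed against /etc/crio/crio.conf.d/02-crio.conf, but the v1.6.0 guest image in this profile predates that drop-in directory, so sed exits 1 and start aborts with RUNTIME_ENABLE. The lines below are only an editorial sketch of the guarding idea, not minikube's implementation; the /etc/crio/crio.conf fallback is an assumption about where the older image keeps its CRI-O configuration.
	# Illustration only: apply the pause_image edit (taken verbatim from the log above)
	# to whichever CRI-O config file actually exists on the guest.
	for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
	  if [ -f "$f" ]; then
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$f"
	    exit 0
	  fi
	done
	echo "no CRI-O config file found" >&2
	exit 1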
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-06 19:37:30.7279262 +0000 UTC m=+3424.812420026
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-832296 -n running-upgrade-832296
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-832296 -n running-upgrade-832296: exit status 4 (320.318372ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:37:31.006686   98148 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-832296" does not appear in /home/jenkins/minikube-integration/17740-63652/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-832296" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
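Editor's note: the status check exits 4 because the kubeconfig no longer carries an endpoint for this profile, so the host reports Running while the endpoint cannot be extracted. The stdout above already names the usual remedy; a minimal repair sequence, assuming the profile had not been deleted yet, would be:
	# Hypothetical repair for the stale-kubeconfig warning shown above
	# (only meaningful while the profile still exists; the cleanup below deletes it).
	out/minikube-linux-amd64 -p running-upgrade-832296 update-context
	out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-832296
	kubectl config current-context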
helpers_test.go:175: Cleaning up "running-upgrade-832296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-832296
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-832296: (1.318939218s)
--- FAIL: TestRunningBinaryUpgrade (139.64s)
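Editor's note: the scenario this test exercises can be reproduced outside the harness by starting a cluster with the old release and then re-running start on the same profile with the binary under test. The flags below mirror the test's own invocations; the release download URL and the profile name are assumptions (any local copy of minikube v1.6.2 and any profile name work).
	# Sketch of the running-upgrade scenario (commands mirror the invocations in this section;
	# the URL is an assumption, substitute any copy of the v1.6.2 release binary).
	curl -Lo /tmp/minikube-v1.6.2 https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-linux-amd64
	chmod +x /tmp/minikube-v1.6.2
	/tmp/minikube-v1.6.2 start -p running-upgrade --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio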

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (269.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.718063773.exe start -p stopped-upgrade-936191 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.718063773.exe start -p stopped-upgrade-936191 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m13.865268637s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.718063773.exe -p stopped-upgrade-936191 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.718063773.exe -p stopped-upgrade-936191 stop: (1m32.803986154s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-936191 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-936191 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (42.724142522s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-936191] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-936191 in cluster stopped-upgrade-936191
	* Restarting existing kvm2 VM for "stopped-upgrade-936191" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:40:36.259546  100230 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:40:36.259839  100230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:40:36.259850  100230 out.go:309] Setting ErrFile to fd 2...
	I1206 19:40:36.259858  100230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:40:36.260152  100230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:40:36.261018  100230 out.go:303] Setting JSON to false
	I1206 19:40:36.262480  100230 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8586,"bootTime":1701883050,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:40:36.262594  100230 start.go:138] virtualization: kvm guest
	I1206 19:40:36.291762  100230 out.go:177] * [stopped-upgrade-936191] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:40:36.354652  100230 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:40:36.354672  100230 notify.go:220] Checking for updates...
	I1206 19:40:36.429562  100230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:40:36.436092  100230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:40:36.446706  100230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:40:36.448495  100230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:40:36.450354  100230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:40:36.453377  100230 config.go:182] Loaded profile config "stopped-upgrade-936191": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1206 19:40:36.453406  100230 start_flags.go:694] config upgrade: Driver=kvm2
	I1206 19:40:36.453421  100230 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1206 19:40:36.453521  100230 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/stopped-upgrade-936191/config.json ...
	I1206 19:40:36.454353  100230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:40:36.454447  100230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:40:36.470284  100230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I1206 19:40:36.470761  100230 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:40:36.471318  100230 main.go:141] libmachine: Using API Version  1
	I1206 19:40:36.471344  100230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:40:36.471698  100230 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:40:36.471890  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:40:36.592170  100230 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1206 19:40:36.654208  100230 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:40:36.654835  100230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:40:36.654917  100230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:40:36.673086  100230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I1206 19:40:36.673757  100230 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:40:36.674381  100230 main.go:141] libmachine: Using API Version  1
	I1206 19:40:36.674435  100230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:40:36.674917  100230 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:40:36.675272  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:40:36.775528  100230 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:40:36.792169  100230 start.go:298] selected driver: kvm2
	I1206 19:40:36.792213  100230 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-936191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.74 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1206 19:40:36.792450  100230 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:40:36.793575  100230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.793695  100230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:40:36.810903  100230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:40:36.811563  100230 cni.go:84] Creating CNI manager for ""
	I1206 19:40:36.811592  100230 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1206 19:40:36.811620  100230 start_flags.go:323] config:
	{Name:stopped-upgrade-936191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.74 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1206 19:40:36.811943  100230 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.859050  100230 out.go:177] * Starting control plane node stopped-upgrade-936191 in cluster stopped-upgrade-936191
	I1206 19:40:36.867738  100230 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1206 19:40:36.907775  100230 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1206 19:40:36.907945  100230 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/stopped-upgrade-936191/config.json ...
	I1206 19:40:36.908070  100230 cache.go:107] acquiring lock: {Name:mk5195646e7dd0f79e637be96dc35fbc12e472e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908097  100230 cache.go:107] acquiring lock: {Name:mk6e4e65abd6f9df232608024e5f23f46678723c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908114  100230 cache.go:107] acquiring lock: {Name:mk2c69bb29434615460b788bdb15e044eb4f10b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908154  100230 cache.go:107] acquiring lock: {Name:mka5144732d6393a384f4a32452dd1d86fb27d83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908176  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 19:40:36.908181  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1206 19:40:36.908190  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1206 19:40:36.908193  100230 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.421µs
	I1206 19:40:36.908195  100230 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 108.261µs
	I1206 19:40:36.908207  100230 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 19:40:36.908209  100230 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1206 19:40:36.908212  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1206 19:40:36.908076  100230 cache.go:107] acquiring lock: {Name:mkebcc4725073a16dd6ef66040e9e24922311f73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908092  100230 cache.go:107] acquiring lock: {Name:mk604bb84a1cb9fbf2b55c38820113aec076231a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908220  100230 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 69.517µs
	I1206 19:40:36.908230  100230 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1206 19:40:36.908220  100230 cache.go:107] acquiring lock: {Name:mkf8f023ea46a22353d294cf75f7b0593d247c42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908210  100230 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 89.028µs
	I1206 19:40:36.908241  100230 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1206 19:40:36.908223  100230 cache.go:107] acquiring lock: {Name:mkcffad656ee0d7e4d7e8c338c0442155a3917fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:40:36.908253  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1206 19:40:36.908268  100230 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 182.525µs
	I1206 19:40:36.908245  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1206 19:40:36.908292  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1206 19:40:36.908295  100230 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1206 19:40:36.908293  100230 cache.go:115] /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1206 19:40:36.908305  100230 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 245.22µs
	I1206 19:40:36.908323  100230 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1206 19:40:36.908304  100230 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 84.052µs
	I1206 19:40:36.908333  100230 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1206 19:40:36.908320  100230 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 111.991µs
	I1206 19:40:36.908346  100230 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1206 19:40:36.908354  100230 cache.go:87] Successfully saved all images to host disk.
	I1206 19:40:36.929808  100230 start.go:365] acquiring machines lock for stopped-upgrade-936191: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:40:36.929917  100230 start.go:369] acquired machines lock for "stopped-upgrade-936191" in 65.098µs
	I1206 19:40:36.929944  100230 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:40:36.929955  100230 fix.go:54] fixHost starting: minikube
	I1206 19:40:36.930428  100230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:40:36.930483  100230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:40:36.946906  100230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1206 19:40:36.947424  100230 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:40:36.947978  100230 main.go:141] libmachine: Using API Version  1
	I1206 19:40:36.948017  100230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:40:36.948456  100230 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:40:36.948756  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:40:36.948943  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetState
	I1206 19:40:36.950864  100230 fix.go:102] recreateIfNeeded on stopped-upgrade-936191: state=Stopped err=<nil>
	I1206 19:40:36.950902  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	W1206 19:40:36.951075  100230 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:40:37.004386  100230 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-936191" ...
	I1206 19:40:37.023347  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .Start
	I1206 19:40:37.023699  100230 main.go:141] libmachine: (stopped-upgrade-936191) Ensuring networks are active...
	I1206 19:40:37.024865  100230 main.go:141] libmachine: (stopped-upgrade-936191) Ensuring network default is active
	I1206 19:40:37.025447  100230 main.go:141] libmachine: (stopped-upgrade-936191) Ensuring network minikube-net is active
	I1206 19:40:37.026107  100230 main.go:141] libmachine: (stopped-upgrade-936191) Getting domain xml...
	I1206 19:40:37.027139  100230 main.go:141] libmachine: (stopped-upgrade-936191) Creating domain...
	I1206 19:40:38.820876  100230 main.go:141] libmachine: (stopped-upgrade-936191) Waiting to get IP...
	I1206 19:40:38.822078  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:38.822724  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:38.822819  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:38.822696  100264 retry.go:31] will retry after 301.964736ms: waiting for machine to come up
	I1206 19:40:39.126544  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:39.127119  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:39.127155  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:39.127059  100264 retry.go:31] will retry after 259.758007ms: waiting for machine to come up
	I1206 19:40:39.388789  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:39.389597  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:39.389779  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:39.389675  100264 retry.go:31] will retry after 445.990211ms: waiting for machine to come up
	I1206 19:40:39.837368  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:39.837989  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:39.838020  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:39.837936  100264 retry.go:31] will retry after 390.496643ms: waiting for machine to come up
	I1206 19:40:40.230616  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:40.231168  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:40.231201  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:40.231111  100264 retry.go:31] will retry after 689.187702ms: waiting for machine to come up
	I1206 19:40:40.921570  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:40.922130  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:40.922163  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:40.922076  100264 retry.go:31] will retry after 615.484939ms: waiting for machine to come up
	I1206 19:40:41.539470  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:41.540006  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:41.540038  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:41.539952  100264 retry.go:31] will retry after 1.156130572s: waiting for machine to come up
	I1206 19:40:42.697389  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:42.697950  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:42.697981  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:42.697893  100264 retry.go:31] will retry after 925.825365ms: waiting for machine to come up
	I1206 19:40:43.625116  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:43.625683  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:43.625713  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:43.625621  100264 retry.go:31] will retry after 1.235648021s: waiting for machine to come up
	I1206 19:40:44.862547  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:44.863082  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:44.863107  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:44.863024  100264 retry.go:31] will retry after 1.610916724s: waiting for machine to come up
	I1206 19:40:46.475320  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:46.475987  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:46.476024  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:46.475927  100264 retry.go:31] will retry after 2.396111489s: waiting for machine to come up
	I1206 19:40:48.874612  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:48.875122  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:48.875151  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:48.875077  100264 retry.go:31] will retry after 2.41153021s: waiting for machine to come up
	I1206 19:40:51.289577  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:51.290074  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:51.290098  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:51.290020  100264 retry.go:31] will retry after 3.539845823s: waiting for machine to come up
	I1206 19:40:54.833551  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:54.834049  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:54.834076  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:54.834013  100264 retry.go:31] will retry after 4.076403084s: waiting for machine to come up
	I1206 19:40:58.913127  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:40:58.913691  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:40:58.913723  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:40:58.913630  100264 retry.go:31] will retry after 5.12019845s: waiting for machine to come up
	I1206 19:41:04.035577  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:04.036273  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:41:04.036298  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:41:04.036225  100264 retry.go:31] will retry after 5.551903271s: waiting for machine to come up
	I1206 19:41:09.590411  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:09.590901  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | unable to find current IP address of domain stopped-upgrade-936191 in network minikube-net
	I1206 19:41:09.590929  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | I1206 19:41:09.590841  100264 retry.go:31] will retry after 6.918515149s: waiting for machine to come up
	I1206 19:41:16.510552  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.511136  100230 main.go:141] libmachine: (stopped-upgrade-936191) Found IP for machine: 192.168.50.74
	I1206 19:41:16.511168  100230 main.go:141] libmachine: (stopped-upgrade-936191) Reserving static IP address...
	I1206 19:41:16.511184  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has current primary IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.511686  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "stopped-upgrade-936191", mac: "52:54:00:66:05:6d", ip: "192.168.50.74"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:16.511714  100230 main.go:141] libmachine: (stopped-upgrade-936191) Reserved static IP address: 192.168.50.74
	I1206 19:41:16.511739  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-936191", mac: "52:54:00:66:05:6d", ip: "192.168.50.74"}
	I1206 19:41:16.511753  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | Getting to WaitForSSH function...
	I1206 19:41:16.511766  100230 main.go:141] libmachine: (stopped-upgrade-936191) Waiting for SSH to be available...
	I1206 19:41:16.514252  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.514592  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:16.514620  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.514762  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | Using SSH client type: external
	I1206 19:41:16.514793  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/stopped-upgrade-936191/id_rsa (-rw-------)
	I1206 19:41:16.514832  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/stopped-upgrade-936191/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:41:16.514858  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | About to run SSH command:
	I1206 19:41:16.514896  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | exit 0
	I1206 19:41:16.648747  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | SSH cmd err, output: <nil>: 
	I1206 19:41:16.649120  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetConfigRaw
	I1206 19:41:16.649791  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetIP
	I1206 19:41:16.652506  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.652934  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:16.652958  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.653208  100230 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/stopped-upgrade-936191/config.json ...
	I1206 19:41:16.653434  100230 machine.go:88] provisioning docker machine ...
	I1206 19:41:16.653454  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:41:16.653641  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetMachineName
	I1206 19:41:16.653791  100230 buildroot.go:166] provisioning hostname "stopped-upgrade-936191"
	I1206 19:41:16.653815  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetMachineName
	I1206 19:41:16.654004  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:16.656579  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.657054  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:16.657083  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.657250  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:16.657433  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:16.657615  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:16.657772  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:16.657983  100230 main.go:141] libmachine: Using SSH client type: native
	I1206 19:41:16.658385  100230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I1206 19:41:16.658400  100230 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-936191 && echo "stopped-upgrade-936191" | sudo tee /etc/hostname
	I1206 19:41:16.780285  100230 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-936191
	
	I1206 19:41:16.780319  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:16.783196  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.783614  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:16.783644  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.783839  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:16.784061  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:16.784212  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:16.784314  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:16.784479  100230 main.go:141] libmachine: Using SSH client type: native
	I1206 19:41:16.784819  100230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I1206 19:41:16.784847  100230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-936191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-936191/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-936191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:41:16.905534  100230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:41:16.905569  100230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:41:16.905602  100230 buildroot.go:174] setting up certificates
	I1206 19:41:16.905616  100230 provision.go:83] configureAuth start
	I1206 19:41:16.905634  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetMachineName
	I1206 19:41:16.905910  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetIP
	I1206 19:41:16.908570  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.909014  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:16.909048  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.909142  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:16.911659  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.912048  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:16.912075  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:16.912172  100230 provision.go:138] copyHostCerts
	I1206 19:41:16.912246  100230 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:41:16.912271  100230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:41:16.912363  100230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:41:16.912478  100230 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:41:16.912489  100230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:41:16.912526  100230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:41:16.912597  100230 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:41:16.912606  100230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:41:16.912640  100230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:41:16.912701  100230 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-936191 san=[192.168.50.74 192.168.50.74 localhost 127.0.0.1 minikube stopped-upgrade-936191]
	I1206 19:41:17.015479  100230 provision.go:172] copyRemoteCerts
	I1206 19:41:17.015547  100230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:41:17.015580  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:17.018449  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:17.018777  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:17.018800  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:17.019025  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:17.019227  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:17.019412  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:17.019548  100230 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/stopped-upgrade-936191/id_rsa Username:docker}
	I1206 19:41:17.104042  100230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:41:17.118072  100230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 19:41:17.132667  100230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:41:17.146049  100230 provision.go:86] duration metric: configureAuth took 240.414656ms
	I1206 19:41:17.146097  100230 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:41:17.146275  100230 config.go:182] Loaded profile config "stopped-upgrade-936191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1206 19:41:17.146368  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:17.148871  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:17.149366  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:17.149399  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:17.149538  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:17.149754  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:17.149947  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:17.150088  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:17.150251  100230 main.go:141] libmachine: Using SSH client type: native
	I1206 19:41:17.150576  100230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I1206 19:41:17.150599  100230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:41:18.097472  100230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:41:18.097504  100230 machine.go:91] provisioned docker machine in 1.444054895s
	I1206 19:41:18.097518  100230 start.go:300] post-start starting for "stopped-upgrade-936191" (driver="kvm2")
	I1206 19:41:18.097531  100230 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:41:18.097553  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:41:18.097890  100230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:41:18.097920  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:18.100726  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.101178  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:18.101204  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.101449  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:18.101702  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:18.101898  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:18.102090  100230 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/stopped-upgrade-936191/id_rsa Username:docker}
	I1206 19:41:18.192252  100230 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:41:18.196213  100230 info.go:137] Remote host: Buildroot 2019.02.7
	I1206 19:41:18.196243  100230 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:41:18.196328  100230 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:41:18.196439  100230 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:41:18.196528  100230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:41:18.202893  100230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:41:18.216895  100230 start.go:303] post-start completed in 119.362935ms
	I1206 19:41:18.216925  100230 fix.go:56] fixHost completed within 41.286971308s
	I1206 19:41:18.216946  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:18.219657  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.220036  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:18.220067  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.220195  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:18.220405  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:18.220580  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:18.220770  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:18.220919  100230 main.go:141] libmachine: Using SSH client type: native
	I1206 19:41:18.221269  100230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I1206 19:41:18.221282  100230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 19:41:18.334152  100230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701891678.278172074
	
	I1206 19:41:18.334174  100230 fix.go:206] guest clock: 1701891678.278172074
	I1206 19:41:18.334191  100230 fix.go:219] Guest: 2023-12-06 19:41:18.278172074 +0000 UTC Remote: 2023-12-06 19:41:18.21692863 +0000 UTC m=+42.022640232 (delta=61.243444ms)
	I1206 19:41:18.334237  100230 fix.go:190] guest clock delta is within tolerance: 61.243444ms
	I1206 19:41:18.334247  100230 start.go:83] releasing machines lock for "stopped-upgrade-936191", held for 41.404313839s
	I1206 19:41:18.334284  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:41:18.334601  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetIP
	I1206 19:41:18.337411  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.337752  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:18.337782  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.337948  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:41:18.338520  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:41:18.338685  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .DriverName
	I1206 19:41:18.338798  100230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:41:18.338845  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:18.338973  100230 ssh_runner.go:195] Run: cat /version.json
	I1206 19:41:18.339001  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHHostname
	I1206 19:41:18.341619  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.341950  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.342047  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:18.342075  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.342200  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:18.342392  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:18.342422  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:05:6d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-06 20:41:04 +0000 UTC Type:0 Mac:52:54:00:66:05:6d Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:stopped-upgrade-936191 Clientid:01:52:54:00:66:05:6d}
	I1206 19:41:18.342452  100230 main.go:141] libmachine: (stopped-upgrade-936191) DBG | domain stopped-upgrade-936191 has defined IP address 192.168.50.74 and MAC address 52:54:00:66:05:6d in network minikube-net
	I1206 19:41:18.342608  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:18.342610  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHPort
	I1206 19:41:18.342778  100230 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/stopped-upgrade-936191/id_rsa Username:docker}
	I1206 19:41:18.342937  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHKeyPath
	I1206 19:41:18.343089  100230 main.go:141] libmachine: (stopped-upgrade-936191) Calling .GetSSHUsername
	I1206 19:41:18.343224  100230 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/stopped-upgrade-936191/id_rsa Username:docker}
	W1206 19:41:18.459133  100230 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1206 19:41:18.459207  100230 ssh_runner.go:195] Run: systemctl --version
	I1206 19:41:18.463997  100230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:41:18.511141  100230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:41:18.517807  100230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:41:18.517863  100230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:41:18.523709  100230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 19:41:18.523738  100230 start.go:475] detecting cgroup driver to use...
	I1206 19:41:18.523820  100230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:41:18.534428  100230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:41:18.543473  100230 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:41:18.543523  100230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:41:18.551530  100230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:41:18.561492  100230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1206 19:41:18.569586  100230 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1206 19:41:18.569659  100230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:41:18.663808  100230 docker.go:219] disabling docker service ...
	I1206 19:41:18.663873  100230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:41:18.677497  100230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:41:18.685795  100230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:41:18.775762  100230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:41:18.872822  100230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:41:18.882476  100230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:41:18.895151  100230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1206 19:41:18.895226  100230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:41:18.904551  100230 out.go:177] 
	W1206 19:41:18.906215  100230 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1206 19:41:18.906241  100230 out.go:239] * 
	W1206 19:41:18.907151  100230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 19:41:18.908864  100230 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-936191 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (269.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-989559 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-989559 --alsologtostderr -v=3: exit status 82 (2m1.317347672s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-989559"  ...
	* Stopping node "no-preload-989559"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:47:33.810576  114142 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:47:33.810849  114142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:47:33.810854  114142 out.go:309] Setting ErrFile to fd 2...
	I1206 19:47:33.810860  114142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:47:33.811099  114142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:47:33.811385  114142 out.go:303] Setting JSON to false
	I1206 19:47:33.811473  114142 mustload.go:65] Loading cluster: no-preload-989559
	I1206 19:47:33.811873  114142 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 19:47:33.811950  114142 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/config.json ...
	I1206 19:47:33.812132  114142 mustload.go:65] Loading cluster: no-preload-989559
	I1206 19:47:33.812244  114142 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 19:47:33.812276  114142 stop.go:39] StopHost: no-preload-989559
	I1206 19:47:33.812763  114142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:47:33.812819  114142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:47:33.833624  114142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I1206 19:47:33.834267  114142 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:47:33.835125  114142 main.go:141] libmachine: Using API Version  1
	I1206 19:47:33.835154  114142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:47:33.835731  114142 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:47:33.838090  114142 out.go:177] * Stopping node "no-preload-989559"  ...
	I1206 19:47:33.844157  114142 main.go:141] libmachine: Stopping "no-preload-989559"...
	I1206 19:47:33.844190  114142 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:47:33.846556  114142 main.go:141] libmachine: (no-preload-989559) Calling .Stop
	I1206 19:47:33.853305  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 0/60
	I1206 19:47:34.852982  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 1/60
	I1206 19:47:35.854733  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 2/60
	I1206 19:47:36.856422  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 3/60
	I1206 19:47:37.857934  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 4/60
	I1206 19:47:38.860268  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 5/60
	I1206 19:47:39.861781  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 6/60
	I1206 19:47:40.863317  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 7/60
	I1206 19:47:41.864688  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 8/60
	I1206 19:47:42.866209  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 9/60
	I1206 19:47:43.868270  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 10/60
	I1206 19:47:44.869769  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 11/60
	I1206 19:47:45.872272  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 12/60
	I1206 19:47:46.873505  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 13/60
	I1206 19:47:47.875870  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 14/60
	I1206 19:47:48.877792  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 15/60
	I1206 19:47:49.879802  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 16/60
	I1206 19:47:50.881190  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 17/60
	I1206 19:47:51.882701  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 18/60
	I1206 19:47:52.883996  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 19/60
	I1206 19:47:53.886322  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 20/60
	I1206 19:47:54.887694  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 21/60
	I1206 19:47:55.888941  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 22/60
	I1206 19:47:56.890441  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 23/60
	I1206 19:47:57.891757  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 24/60
	I1206 19:47:58.893912  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 25/60
	I1206 19:47:59.896120  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 26/60
	I1206 19:48:00.897584  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 27/60
	I1206 19:48:01.900055  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 28/60
	I1206 19:48:02.901881  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 29/60
	I1206 19:48:03.903290  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 30/60
	I1206 19:48:04.904753  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 31/60
	I1206 19:48:05.906206  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 32/60
	I1206 19:48:06.907663  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 33/60
	I1206 19:48:07.909037  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 34/60
	I1206 19:48:08.911043  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 35/60
	I1206 19:48:09.912240  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 36/60
	I1206 19:48:10.913768  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 37/60
	I1206 19:48:11.915316  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 38/60
	I1206 19:48:12.917136  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 39/60
	I1206 19:48:13.919223  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 40/60
	I1206 19:48:14.920448  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 41/60
	I1206 19:48:15.922015  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 42/60
	I1206 19:48:16.923356  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 43/60
	I1206 19:48:17.924852  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 44/60
	I1206 19:48:18.926763  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 45/60
	I1206 19:48:19.928327  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 46/60
	I1206 19:48:20.929755  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 47/60
	I1206 19:48:21.931159  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 48/60
	I1206 19:48:22.932465  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 49/60
	I1206 19:48:23.934531  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 50/60
	I1206 19:48:24.935838  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 51/60
	I1206 19:48:25.937107  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 52/60
	I1206 19:48:26.938395  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 53/60
	I1206 19:48:27.939713  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 54/60
	I1206 19:48:28.941600  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 55/60
	I1206 19:48:29.943082  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 56/60
	I1206 19:48:30.944626  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 57/60
	I1206 19:48:31.946177  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 58/60
	I1206 19:48:32.947687  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 59/60
	I1206 19:48:33.948944  114142 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:48:33.949035  114142 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:48:33.949067  114142 retry.go:31] will retry after 979.488527ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:48:34.929197  114142 stop.go:39] StopHost: no-preload-989559
	I1206 19:48:34.929595  114142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:48:34.929684  114142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:48:34.944023  114142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I1206 19:48:34.944497  114142 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:48:34.945033  114142 main.go:141] libmachine: Using API Version  1
	I1206 19:48:34.945064  114142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:48:34.945377  114142 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:48:34.947440  114142 out.go:177] * Stopping node "no-preload-989559"  ...
	I1206 19:48:34.948793  114142 main.go:141] libmachine: Stopping "no-preload-989559"...
	I1206 19:48:34.948807  114142 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:48:34.950427  114142 main.go:141] libmachine: (no-preload-989559) Calling .Stop
	I1206 19:48:34.953434  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 0/60
	I1206 19:48:35.954706  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 1/60
	I1206 19:48:36.956029  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 2/60
	I1206 19:48:37.957390  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 3/60
	I1206 19:48:38.958782  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 4/60
	I1206 19:48:39.960498  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 5/60
	I1206 19:48:40.961957  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 6/60
	I1206 19:48:41.963219  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 7/60
	I1206 19:48:42.964482  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 8/60
	I1206 19:48:43.965908  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 9/60
	I1206 19:48:44.967982  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 10/60
	I1206 19:48:45.969302  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 11/60
	I1206 19:48:46.971046  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 12/60
	I1206 19:48:47.972359  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 13/60
	I1206 19:48:48.973708  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 14/60
	I1206 19:48:49.975420  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 15/60
	I1206 19:48:50.976742  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 16/60
	I1206 19:48:51.978139  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 17/60
	I1206 19:48:52.979736  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 18/60
	I1206 19:48:53.980929  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 19/60
	I1206 19:48:54.982504  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 20/60
	I1206 19:48:55.983881  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 21/60
	I1206 19:48:56.985114  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 22/60
	I1206 19:48:57.986566  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 23/60
	I1206 19:48:58.987779  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 24/60
	I1206 19:48:59.989491  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 25/60
	I1206 19:49:00.990780  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 26/60
	I1206 19:49:01.992204  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 27/60
	I1206 19:49:02.993789  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 28/60
	I1206 19:49:03.995158  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 29/60
	I1206 19:49:04.996889  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 30/60
	I1206 19:49:05.998301  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 31/60
	I1206 19:49:06.999701  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 32/60
	I1206 19:49:08.001081  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 33/60
	I1206 19:49:09.002380  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 34/60
	I1206 19:49:10.004313  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 35/60
	I1206 19:49:11.005544  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 36/60
	I1206 19:49:12.006981  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 37/60
	I1206 19:49:13.008193  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 38/60
	I1206 19:49:14.009609  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 39/60
	I1206 19:49:15.011066  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 40/60
	I1206 19:49:16.012563  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 41/60
	I1206 19:49:17.014116  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 42/60
	I1206 19:49:18.015675  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 43/60
	I1206 19:49:19.017155  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 44/60
	I1206 19:49:20.019154  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 45/60
	I1206 19:49:21.020761  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 46/60
	I1206 19:49:22.022462  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 47/60
	I1206 19:49:23.023909  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 48/60
	I1206 19:49:24.026331  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 49/60
	I1206 19:49:25.028269  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 50/60
	I1206 19:49:26.029978  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 51/60
	I1206 19:49:27.031389  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 52/60
	I1206 19:49:28.032823  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 53/60
	I1206 19:49:29.034300  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 54/60
	I1206 19:49:30.035905  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 55/60
	I1206 19:49:31.037415  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 56/60
	I1206 19:49:32.038819  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 57/60
	I1206 19:49:33.039968  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 58/60
	I1206 19:49:34.041339  114142 main.go:141] libmachine: (no-preload-989559) Waiting for machine to stop 59/60
	I1206 19:49:35.042176  114142 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:49:35.042223  114142 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:49:35.044140  114142 out.go:177] 
	W1206 19:49:35.045504  114142 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1206 19:49:35.045523  114142 out.go:239] * 
	* 
	W1206 19:49:35.048789  114142 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 19:49:35.050584  114142 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-989559 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559
E1206 19:49:35.646854   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:36.927441   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:39.487894   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:39.757340   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:49:44.608937   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559: exit status 3 (18.465697422s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:49:53.517567  114841 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E1206 19:49:53.517590  114841 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-989559" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.78s)
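
For context on the failure mode above: the log shows the stop path polling the VM state once per second for 60 iterations, retrying once after a sub-second backoff, and then exiting with GUEST_STOP_TIMEOUT. The following is a minimal, hypothetical Go sketch of that polling/retry pattern as inferred from the log; names such as waitForStop and vmStillRunning are illustrative and are not the actual minikube stop.go or kvm2 driver API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls the VM state once per second, mirroring the
	// "Waiting for machine to stop i/60" lines in the log above.
	func waitForStop(name string, polls int) error {
		for i := 0; i < polls; i++ {
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, polls)
			if !vmStillRunning(name) {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// vmStillRunning is a stand-in for the driver's GetState call; always
	// returning true reproduces the timeout seen in this test run.
	func vmStillRunning(name string) bool {
		return true
	}

	func main() {
		const attempts = 2 // the log shows one initial attempt plus one retry
		for a := 0; a < attempts; a++ {
			if err := waitForStop("no-preload-989559", 60); err == nil {
				fmt.Println("machine stopped")
				return
			}
			time.Sleep(time.Second) // the real retry uses a randomized ~1s backoff
		}
		fmt.Println(`X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: unable to stop vm, current state "Running"`)
	}

Under these assumptions, two full 60-second attempts plus the retry backoff account for the roughly two minutes (2m0.7s) the stop command ran before exiting with status 82.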

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-448851 --alsologtostderr -v=3
E1206 19:47:54.631734   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:48:02.203907   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:02.209208   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:02.219535   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:02.240445   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:02.280796   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:02.361204   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:02.521915   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:02.843056   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:03.483266   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-448851 --alsologtostderr -v=3: exit status 82 (2m1.463641402s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-448851"  ...
	* Stopping node "old-k8s-version-448851"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:47:45.572833  114270 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:47:45.572944  114270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:47:45.572954  114270 out.go:309] Setting ErrFile to fd 2...
	I1206 19:47:45.572958  114270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:47:45.573154  114270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:47:45.573454  114270 out.go:303] Setting JSON to false
	I1206 19:47:45.573539  114270 mustload.go:65] Loading cluster: old-k8s-version-448851
	I1206 19:47:45.573846  114270 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 19:47:45.573912  114270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/config.json ...
	I1206 19:47:45.574071  114270 mustload.go:65] Loading cluster: old-k8s-version-448851
	I1206 19:47:45.574166  114270 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 19:47:45.574193  114270 stop.go:39] StopHost: old-k8s-version-448851
	I1206 19:47:45.574589  114270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:47:45.574631  114270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:47:45.589566  114270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I1206 19:47:45.590139  114270 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:47:45.590793  114270 main.go:141] libmachine: Using API Version  1
	I1206 19:47:45.590822  114270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:47:45.591317  114270 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:47:45.593848  114270 out.go:177] * Stopping node "old-k8s-version-448851"  ...
	I1206 19:47:45.595725  114270 main.go:141] libmachine: Stopping "old-k8s-version-448851"...
	I1206 19:47:45.595767  114270 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 19:47:45.597815  114270 main.go:141] libmachine: (old-k8s-version-448851) Calling .Stop
	I1206 19:47:45.603068  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 0/60
	I1206 19:47:46.605224  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 1/60
	I1206 19:47:47.606602  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 2/60
	I1206 19:47:48.608203  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 3/60
	I1206 19:47:49.609605  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 4/60
	I1206 19:47:50.611468  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 5/60
	I1206 19:47:51.613003  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 6/60
	I1206 19:47:52.614405  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 7/60
	I1206 19:47:53.615862  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 8/60
	I1206 19:47:54.617419  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 9/60
	I1206 19:47:55.619740  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 10/60
	I1206 19:47:56.621003  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 11/60
	I1206 19:47:57.622454  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 12/60
	I1206 19:47:58.623984  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 13/60
	I1206 19:47:59.626186  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 14/60
	I1206 19:48:00.629409  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 15/60
	I1206 19:48:01.631619  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 16/60
	I1206 19:48:02.633313  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 17/60
	I1206 19:48:03.635339  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 18/60
	I1206 19:48:04.637020  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 19/60
	I1206 19:48:05.639183  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 20/60
	I1206 19:48:06.640953  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 21/60
	I1206 19:48:07.642336  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 22/60
	I1206 19:48:08.643659  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 23/60
	I1206 19:48:09.644973  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 24/60
	I1206 19:48:10.646818  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 25/60
	I1206 19:48:11.648133  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 26/60
	I1206 19:48:12.649459  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 27/60
	I1206 19:48:13.651774  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 28/60
	I1206 19:48:14.653398  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 29/60
	I1206 19:48:15.655700  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 30/60
	I1206 19:48:16.657034  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 31/60
	I1206 19:48:17.658457  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 32/60
	I1206 19:48:18.660014  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 33/60
	I1206 19:48:19.661440  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 34/60
	I1206 19:48:20.663253  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 35/60
	I1206 19:48:21.664548  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 36/60
	I1206 19:48:22.665959  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 37/60
	I1206 19:48:23.667361  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 38/60
	I1206 19:48:24.668642  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 39/60
	I1206 19:48:25.671064  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 40/60
	I1206 19:48:26.672393  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 41/60
	I1206 19:48:27.673837  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 42/60
	I1206 19:48:28.675265  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 43/60
	I1206 19:48:29.676792  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 44/60
	I1206 19:48:30.678524  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 45/60
	I1206 19:48:31.679965  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 46/60
	I1206 19:48:32.681504  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 47/60
	I1206 19:48:33.683669  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 48/60
	I1206 19:48:34.685172  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 49/60
	I1206 19:48:35.687316  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 50/60
	I1206 19:48:36.688669  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 51/60
	I1206 19:48:37.690133  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 52/60
	I1206 19:48:38.691952  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 53/60
	I1206 19:48:39.693429  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 54/60
	I1206 19:48:40.695625  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 55/60
	I1206 19:48:41.696910  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 56/60
	I1206 19:48:42.698371  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 57/60
	I1206 19:48:43.699873  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 58/60
	I1206 19:48:44.701270  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 59/60
	I1206 19:48:45.702680  114270 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:48:45.702758  114270 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:48:45.702777  114270 retry.go:31] will retry after 1.144006206s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:48:46.847066  114270 stop.go:39] StopHost: old-k8s-version-448851
	I1206 19:48:46.847560  114270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:48:46.847618  114270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:48:46.862504  114270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I1206 19:48:46.862960  114270 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:48:46.863452  114270 main.go:141] libmachine: Using API Version  1
	I1206 19:48:46.863472  114270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:48:46.863866  114270 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:48:46.865767  114270 out.go:177] * Stopping node "old-k8s-version-448851"  ...
	I1206 19:48:46.867444  114270 main.go:141] libmachine: Stopping "old-k8s-version-448851"...
	I1206 19:48:46.867464  114270 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 19:48:46.869193  114270 main.go:141] libmachine: (old-k8s-version-448851) Calling .Stop
	I1206 19:48:46.872393  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 0/60
	I1206 19:48:47.873982  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 1/60
	I1206 19:48:48.875798  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 2/60
	I1206 19:48:49.877313  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 3/60
	I1206 19:48:50.878721  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 4/60
	I1206 19:48:51.880797  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 5/60
	I1206 19:48:52.882387  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 6/60
	I1206 19:48:53.883783  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 7/60
	I1206 19:48:54.885336  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 8/60
	I1206 19:48:55.886781  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 9/60
	I1206 19:48:56.888768  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 10/60
	I1206 19:48:57.890115  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 11/60
	I1206 19:48:58.891628  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 12/60
	I1206 19:48:59.893153  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 13/60
	I1206 19:49:00.894716  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 14/60
	I1206 19:49:01.896664  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 15/60
	I1206 19:49:02.898328  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 16/60
	I1206 19:49:03.899689  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 17/60
	I1206 19:49:04.901068  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 18/60
	I1206 19:49:05.902900  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 19/60
	I1206 19:49:06.904603  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 20/60
	I1206 19:49:07.905953  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 21/60
	I1206 19:49:08.907172  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 22/60
	I1206 19:49:09.908666  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 23/60
	I1206 19:49:10.910037  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 24/60
	I1206 19:49:11.911871  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 25/60
	I1206 19:49:12.913363  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 26/60
	I1206 19:49:13.914798  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 27/60
	I1206 19:49:14.916160  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 28/60
	I1206 19:49:15.917803  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 29/60
	I1206 19:49:16.920040  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 30/60
	I1206 19:49:17.921614  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 31/60
	I1206 19:49:18.923211  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 32/60
	I1206 19:49:19.924621  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 33/60
	I1206 19:49:20.926117  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 34/60
	I1206 19:49:21.928217  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 35/60
	I1206 19:49:22.929568  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 36/60
	I1206 19:49:23.931774  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 37/60
	I1206 19:49:24.933057  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 38/60
	I1206 19:49:25.934360  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 39/60
	I1206 19:49:26.936407  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 40/60
	I1206 19:49:27.938019  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 41/60
	I1206 19:49:28.939409  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 42/60
	I1206 19:49:29.940797  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 43/60
	I1206 19:49:30.942130  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 44/60
	I1206 19:49:31.943890  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 45/60
	I1206 19:49:32.945350  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 46/60
	I1206 19:49:33.946649  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 47/60
	I1206 19:49:34.947937  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 48/60
	I1206 19:49:35.949128  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 49/60
	I1206 19:49:36.951435  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 50/60
	I1206 19:49:37.952798  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 51/60
	I1206 19:49:38.954464  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 52/60
	I1206 19:49:39.955881  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 53/60
	I1206 19:49:40.957453  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 54/60
	I1206 19:49:41.959559  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 55/60
	I1206 19:49:42.960840  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 56/60
	I1206 19:49:43.962123  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 57/60
	I1206 19:49:44.963634  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 58/60
	I1206 19:49:45.965244  114270 main.go:141] libmachine: (old-k8s-version-448851) Waiting for machine to stop 59/60
	I1206 19:49:46.966432  114270 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:49:46.966478  114270 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:49:46.968724  114270 out.go:177] 
	W1206 19:49:46.970099  114270 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1206 19:49:46.970115  114270 out.go:239] * 
	* 
	W1206 19:49:46.973324  114270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 19:49:46.975175  114270 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-448851 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851
E1206 19:49:49.042739   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:49.048037   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:49.058292   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:49.078551   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:49.118865   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:49.199251   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:49.359492   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:49.680430   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:50.321296   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:51.602266   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851: exit status 3 (18.573238485s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:05.549560  114895 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.33:22: connect: no route to host
	E1206 19:50:05.549580  114895 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.33:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-448851" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.04s)
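
The post-mortem above reports the host as "Error" (exit status 3) because the node's SSH port is unreachable ("dial tcp 192.168.61.33:22: connect: no route to host"), and the helper then treats that state as "may be ok" and skips log retrieval. Below is a rough, hypothetical Go sketch of that reachability check; checkHostSSH is illustrative and not minikube's actual status implementation.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// checkHostSSH approximates the post-mortem status probe: if the node's
	// SSH port cannot be reached, the host state is reported as "Error".
	func checkHostSSH(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err != nil {
			fmt.Printf("status error: %v\n", err) // e.g. "connect: no route to host"
			return "Error"
		}
		conn.Close()
		return "Running"
	}

	func main() {
		// 192.168.61.33 is the node IP reported in the stderr block above.
		state := checkHostSSH("192.168.61.33:22")
		fmt.Println(state)
		if state != "Running" {
			// mirrors helpers_test.go: a non-running host skips log retrieval
			fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n",
				"old-k8s-version-448851", state)
		}
	}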

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-380424 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-380424 --alsologtostderr -v=3: exit status 82 (2m1.704965661s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-380424"  ...
	* Stopping node "default-k8s-diff-port-380424"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:48:13.705532  114494 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:48:13.705677  114494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:48:13.705691  114494 out.go:309] Setting ErrFile to fd 2...
	I1206 19:48:13.705695  114494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:48:13.705891  114494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:48:13.706139  114494 out.go:303] Setting JSON to false
	I1206 19:48:13.706239  114494 mustload.go:65] Loading cluster: default-k8s-diff-port-380424
	I1206 19:48:13.706657  114494 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:48:13.706744  114494 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:48:13.706936  114494 mustload.go:65] Loading cluster: default-k8s-diff-port-380424
	I1206 19:48:13.707065  114494 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:48:13.707127  114494 stop.go:39] StopHost: default-k8s-diff-port-380424
	I1206 19:48:13.707569  114494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:48:13.707618  114494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:48:13.721925  114494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41405
	I1206 19:48:13.722470  114494 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:48:13.723091  114494 main.go:141] libmachine: Using API Version  1
	I1206 19:48:13.723127  114494 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:48:13.723484  114494 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:48:13.726108  114494 out.go:177] * Stopping node "default-k8s-diff-port-380424"  ...
	I1206 19:48:13.728090  114494 main.go:141] libmachine: Stopping "default-k8s-diff-port-380424"...
	I1206 19:48:13.728108  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 19:48:13.729955  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Stop
	I1206 19:48:13.733795  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 0/60
	I1206 19:48:14.736064  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 1/60
	I1206 19:48:15.737456  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 2/60
	I1206 19:48:16.739882  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 3/60
	I1206 19:48:17.741335  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 4/60
	I1206 19:48:18.743325  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 5/60
	I1206 19:48:19.744753  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 6/60
	I1206 19:48:20.746140  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 7/60
	I1206 19:48:21.747502  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 8/60
	I1206 19:48:22.748797  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 9/60
	I1206 19:48:23.750954  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 10/60
	I1206 19:48:24.752393  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 11/60
	I1206 19:48:25.753843  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 12/60
	I1206 19:48:26.755244  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 13/60
	I1206 19:48:27.756593  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 14/60
	I1206 19:48:28.758623  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 15/60
	I1206 19:48:29.760315  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 16/60
	I1206 19:48:30.761642  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 17/60
	I1206 19:48:31.763166  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 18/60
	I1206 19:48:32.764717  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 19/60
	I1206 19:48:33.766532  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 20/60
	I1206 19:48:34.767901  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 21/60
	I1206 19:48:35.769272  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 22/60
	I1206 19:48:36.770519  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 23/60
	I1206 19:48:37.771930  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 24/60
	I1206 19:48:38.773891  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 25/60
	I1206 19:48:39.775733  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 26/60
	I1206 19:48:40.777161  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 27/60
	I1206 19:48:41.778653  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 28/60
	I1206 19:48:42.779982  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 29/60
	I1206 19:48:43.781999  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 30/60
	I1206 19:48:44.783360  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 31/60
	I1206 19:48:45.784669  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 32/60
	I1206 19:48:46.786120  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 33/60
	I1206 19:48:47.787631  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 34/60
	I1206 19:48:48.789690  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 35/60
	I1206 19:48:49.791176  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 36/60
	I1206 19:48:50.792665  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 37/60
	I1206 19:48:51.794172  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 38/60
	I1206 19:48:52.795930  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 39/60
	I1206 19:48:53.798213  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 40/60
	I1206 19:48:54.799684  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 41/60
	I1206 19:48:55.800964  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 42/60
	I1206 19:48:56.802274  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 43/60
	I1206 19:48:57.803543  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 44/60
	I1206 19:48:58.805205  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 45/60
	I1206 19:48:59.806889  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 46/60
	I1206 19:49:00.808236  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 47/60
	I1206 19:49:01.809712  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 48/60
	I1206 19:49:02.811039  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 49/60
	I1206 19:49:03.813285  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 50/60
	I1206 19:49:04.814696  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 51/60
	I1206 19:49:05.816110  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 52/60
	I1206 19:49:06.817600  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 53/60
	I1206 19:49:07.818951  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 54/60
	I1206 19:49:08.820348  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 55/60
	I1206 19:49:09.821828  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 56/60
	I1206 19:49:10.823174  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 57/60
	I1206 19:49:11.824393  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 58/60
	I1206 19:49:12.825726  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 59/60
	I1206 19:49:13.826809  114494 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:49:13.826865  114494 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:49:13.826919  114494 retry.go:31] will retry after 1.39194999s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:49:15.219399  114494 stop.go:39] StopHost: default-k8s-diff-port-380424
	I1206 19:49:15.219863  114494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:49:15.219922  114494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:49:15.234920  114494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1206 19:49:15.235384  114494 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:49:15.235853  114494 main.go:141] libmachine: Using API Version  1
	I1206 19:49:15.235873  114494 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:49:15.236191  114494 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:49:15.238493  114494 out.go:177] * Stopping node "default-k8s-diff-port-380424"  ...
	I1206 19:49:15.240032  114494 main.go:141] libmachine: Stopping "default-k8s-diff-port-380424"...
	I1206 19:49:15.240045  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 19:49:15.241633  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Stop
	I1206 19:49:15.244867  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 0/60
	I1206 19:49:16.246715  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 1/60
	I1206 19:49:17.248141  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 2/60
	I1206 19:49:18.250000  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 3/60
	I1206 19:49:19.251541  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 4/60
	I1206 19:49:20.253538  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 5/60
	I1206 19:49:21.255377  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 6/60
	I1206 19:49:22.256751  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 7/60
	I1206 19:49:23.258446  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 8/60
	I1206 19:49:24.259862  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 9/60
	I1206 19:49:25.262304  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 10/60
	I1206 19:49:26.263917  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 11/60
	I1206 19:49:27.265443  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 12/60
	I1206 19:49:28.266958  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 13/60
	I1206 19:49:29.268562  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 14/60
	I1206 19:49:30.270644  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 15/60
	I1206 19:49:31.272126  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 16/60
	I1206 19:49:32.273617  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 17/60
	I1206 19:49:33.274929  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 18/60
	I1206 19:49:34.276424  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 19/60
	I1206 19:49:35.278430  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 20/60
	I1206 19:49:36.279923  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 21/60
	I1206 19:49:37.281747  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 22/60
	I1206 19:49:38.283263  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 23/60
	I1206 19:49:39.284770  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 24/60
	I1206 19:49:40.286768  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 25/60
	I1206 19:49:41.288167  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 26/60
	I1206 19:49:42.289555  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 27/60
	I1206 19:49:43.291200  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 28/60
	I1206 19:49:44.292486  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 29/60
	I1206 19:49:45.294441  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 30/60
	I1206 19:49:46.295895  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 31/60
	I1206 19:49:47.297366  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 32/60
	I1206 19:49:48.298920  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 33/60
	I1206 19:49:49.300228  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 34/60
	I1206 19:49:50.301917  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 35/60
	I1206 19:49:51.303403  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 36/60
	I1206 19:49:52.304869  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 37/60
	I1206 19:49:53.306556  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 38/60
	I1206 19:49:54.307949  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 39/60
	I1206 19:49:55.309882  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 40/60
	I1206 19:49:56.311239  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 41/60
	I1206 19:49:57.312614  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 42/60
	I1206 19:49:58.314134  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 43/60
	I1206 19:49:59.315824  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 44/60
	I1206 19:50:00.318293  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 45/60
	I1206 19:50:01.319902  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 46/60
	I1206 19:50:02.321338  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 47/60
	I1206 19:50:03.322919  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 48/60
	I1206 19:50:04.324271  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 49/60
	I1206 19:50:05.326038  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 50/60
	I1206 19:50:06.327408  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 51/60
	I1206 19:50:07.328988  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 52/60
	I1206 19:50:08.330674  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 53/60
	I1206 19:50:09.331988  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 54/60
	I1206 19:50:10.334092  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 55/60
	I1206 19:50:11.335521  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 56/60
	I1206 19:50:12.337067  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 57/60
	I1206 19:50:13.338418  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 58/60
	I1206 19:50:14.339766  114494 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for machine to stop 59/60
	I1206 19:50:15.340625  114494 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:50:15.340686  114494 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:50:15.342828  114494 out.go:177] 
	W1206 19:50:15.344400  114494 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1206 19:50:15.344415  114494 out.go:239] * 
	* 
	W1206 19:50:15.347791  114494 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 19:50:15.349349  114494 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-380424 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424: exit status 3 (18.613977086s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:33.965615  115188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.22:22: connect: no route to host
	E1206 19:50:33.965641  115188 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.22:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-380424" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.32s)
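The trace above is the whole stop path: libmachine issues the Stop call, polls the domain state once per second for 60 iterations ("Waiting for machine to stop i/60"), retries the entire StopHost once after a ~1.4s backoff, and finally exits with GUEST_STOP_TIMEOUT (exit status 82) because the VM never leaves "Running". Below is a minimal Go sketch of that wait-and-retry shape; the machine interface, the stuckVM fake, and the shortened poll interval are illustrative assumptions for this report, not minikube's actual stop implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine abstracts the two libmachine calls visible in the trace above
// (Calling .Stop and Calling .GetState). Both the interface and the fake
// below are illustrative, not minikube's real driver API.
type machine interface {
	Stop() error
	State() (string, error)
}

// stuckVM models the failure mode in this run: the stop request is
// accepted but the domain never leaves the "Running" state.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

// waitForStop mirrors the "Waiting for machine to stop i/60" loop: poll
// once per interval and give up after 60 attempts. The real loop sleeps
// one second per iteration; the sketch shortens it so it finishes quickly.
func waitForStop(m machine, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < 60; i++ {
		if st, err := m.State(); err == nil && st != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/60\n", i)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	m := stuckVM{}
	// Two passes, matching the single retry ("will retry after 1.39194999s")
	// before the command exits with a GUEST_STOP_TIMEOUT-style error.
	for attempt := 0; attempt < 2; attempt++ {
		if err := waitForStop(m, 10*time.Millisecond); err == nil {
			return
		}
		if attempt == 0 {
			time.Sleep(1400 * time.Millisecond)
		}
	}
	fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM")
}

Run as written, the sketch prints the 0/60..59/60 countdown twice and then the timeout message, which is the behaviour recorded for default-k8s-diff-port-380424 above.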

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-209025 --alsologtostderr -v=3
E1206 19:48:18.407038   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:22.657927   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:48:22.686260   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:28.647536   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:43.167065   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:48:49.128354   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:58.794236   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:48:58.799517   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:48:58.810454   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:48:58.830764   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:48:58.871094   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:48:58.951946   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:48:59.112410   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:48:59.433067   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:49:00.073891   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:49:01.354400   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:49:03.915049   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:49:09.035739   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:49:19.276925   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:49:24.127809   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:49:30.089341   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:49:34.368115   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:34.373451   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:34.383810   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:34.404160   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:34.444519   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:34.525331   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:34.685975   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:49:35.006189   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-209025 --alsologtostderr -v=3: exit status 82 (2m1.192545684s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-209025"  ...
	* Stopping node "embed-certs-209025"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:48:17.419166  114581 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:48:17.419431  114581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:48:17.419440  114581 out.go:309] Setting ErrFile to fd 2...
	I1206 19:48:17.419444  114581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:48:17.419615  114581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:48:17.419871  114581 out.go:303] Setting JSON to false
	I1206 19:48:17.419949  114581 mustload.go:65] Loading cluster: embed-certs-209025
	I1206 19:48:17.420331  114581 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:48:17.420400  114581 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:48:17.420550  114581 mustload.go:65] Loading cluster: embed-certs-209025
	I1206 19:48:17.420655  114581 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:48:17.420680  114581 stop.go:39] StopHost: embed-certs-209025
	I1206 19:48:17.421094  114581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:48:17.421143  114581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:48:17.436272  114581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I1206 19:48:17.436768  114581 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:48:17.437506  114581 main.go:141] libmachine: Using API Version  1
	I1206 19:48:17.437533  114581 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:48:17.437966  114581 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:48:17.439862  114581 out.go:177] * Stopping node "embed-certs-209025"  ...
	I1206 19:48:17.441508  114581 main.go:141] libmachine: Stopping "embed-certs-209025"...
	I1206 19:48:17.441527  114581 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 19:48:17.443283  114581 main.go:141] libmachine: (embed-certs-209025) Calling .Stop
	I1206 19:48:17.446167  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 0/60
	I1206 19:48:18.447998  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 1/60
	I1206 19:48:19.449312  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 2/60
	I1206 19:48:20.450743  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 3/60
	I1206 19:48:21.452213  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 4/60
	I1206 19:48:22.454370  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 5/60
	I1206 19:48:23.455729  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 6/60
	I1206 19:48:24.457040  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 7/60
	I1206 19:48:25.458396  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 8/60
	I1206 19:48:26.459970  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 9/60
	I1206 19:48:27.461575  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 10/60
	I1206 19:48:28.463059  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 11/60
	I1206 19:48:29.464447  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 12/60
	I1206 19:48:30.465876  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 13/60
	I1206 19:48:31.467187  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 14/60
	I1206 19:48:32.469546  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 15/60
	I1206 19:48:33.471193  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 16/60
	I1206 19:48:34.472525  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 17/60
	I1206 19:48:35.474000  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 18/60
	I1206 19:48:36.475236  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 19/60
	I1206 19:48:37.477689  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 20/60
	I1206 19:48:38.479084  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 21/60
	I1206 19:48:39.480564  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 22/60
	I1206 19:48:40.481939  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 23/60
	I1206 19:48:41.483426  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 24/60
	I1206 19:48:42.485515  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 25/60
	I1206 19:48:43.487069  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 26/60
	I1206 19:48:44.488319  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 27/60
	I1206 19:48:45.489898  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 28/60
	I1206 19:48:46.491282  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 29/60
	I1206 19:48:47.493538  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 30/60
	I1206 19:48:48.495639  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 31/60
	I1206 19:48:49.497434  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 32/60
	I1206 19:48:50.499100  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 33/60
	I1206 19:48:51.500612  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 34/60
	I1206 19:48:52.502950  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 35/60
	I1206 19:48:53.504486  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 36/60
	I1206 19:48:54.505953  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 37/60
	I1206 19:48:55.507332  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 38/60
	I1206 19:48:56.508792  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 39/60
	I1206 19:48:57.510214  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 40/60
	I1206 19:48:58.511605  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 41/60
	I1206 19:48:59.513128  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 42/60
	I1206 19:49:00.514559  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 43/60
	I1206 19:49:01.515883  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 44/60
	I1206 19:49:02.517943  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 45/60
	I1206 19:49:03.519413  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 46/60
	I1206 19:49:04.521017  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 47/60
	I1206 19:49:05.522416  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 48/60
	I1206 19:49:06.523736  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 49/60
	I1206 19:49:07.525795  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 50/60
	I1206 19:49:08.527189  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 51/60
	I1206 19:49:09.528369  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 52/60
	I1206 19:49:10.529656  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 53/60
	I1206 19:49:11.531571  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 54/60
	I1206 19:49:12.533359  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 55/60
	I1206 19:49:13.534829  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 56/60
	I1206 19:49:14.536001  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 57/60
	I1206 19:49:15.537393  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 58/60
	I1206 19:49:16.538962  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 59/60
	I1206 19:49:17.540521  114581 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:49:17.540582  114581 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:49:17.540606  114581 retry.go:31] will retry after 886.450796ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:49:18.427663  114581 stop.go:39] StopHost: embed-certs-209025
	I1206 19:49:18.428125  114581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:49:18.428189  114581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:49:18.443275  114581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I1206 19:49:18.443747  114581 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:49:18.444249  114581 main.go:141] libmachine: Using API Version  1
	I1206 19:49:18.444271  114581 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:49:18.444606  114581 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:49:18.446677  114581 out.go:177] * Stopping node "embed-certs-209025"  ...
	I1206 19:49:18.448233  114581 main.go:141] libmachine: Stopping "embed-certs-209025"...
	I1206 19:49:18.448250  114581 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 19:49:18.449846  114581 main.go:141] libmachine: (embed-certs-209025) Calling .Stop
	I1206 19:49:18.452992  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 0/60
	I1206 19:49:19.454452  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 1/60
	I1206 19:49:20.455797  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 2/60
	I1206 19:49:21.457374  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 3/60
	I1206 19:49:22.458782  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 4/60
	I1206 19:49:23.460760  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 5/60
	I1206 19:49:24.462226  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 6/60
	I1206 19:49:25.463685  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 7/60
	I1206 19:49:26.465144  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 8/60
	I1206 19:49:27.466517  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 9/60
	I1206 19:49:28.468576  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 10/60
	I1206 19:49:29.470505  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 11/60
	I1206 19:49:30.471807  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 12/60
	I1206 19:49:31.473203  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 13/60
	I1206 19:49:32.474699  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 14/60
	I1206 19:49:33.476611  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 15/60
	I1206 19:49:34.477965  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 16/60
	I1206 19:49:35.479365  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 17/60
	I1206 19:49:36.480725  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 18/60
	I1206 19:49:37.482280  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 19/60
	I1206 19:49:38.484068  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 20/60
	I1206 19:49:39.485605  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 21/60
	I1206 19:49:40.487078  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 22/60
	I1206 19:49:41.488496  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 23/60
	I1206 19:49:42.489839  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 24/60
	I1206 19:49:43.492176  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 25/60
	I1206 19:49:44.493506  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 26/60
	I1206 19:49:45.495016  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 27/60
	I1206 19:49:46.496415  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 28/60
	I1206 19:49:47.497853  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 29/60
	I1206 19:49:48.499687  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 30/60
	I1206 19:49:49.500957  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 31/60
	I1206 19:49:50.502394  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 32/60
	I1206 19:49:51.503677  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 33/60
	I1206 19:49:52.505013  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 34/60
	I1206 19:49:53.506685  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 35/60
	I1206 19:49:54.508054  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 36/60
	I1206 19:49:55.509512  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 37/60
	I1206 19:49:56.511074  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 38/60
	I1206 19:49:57.512545  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 39/60
	I1206 19:49:58.514301  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 40/60
	I1206 19:49:59.515717  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 41/60
	I1206 19:50:00.517326  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 42/60
	I1206 19:50:01.518724  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 43/60
	I1206 19:50:02.519994  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 44/60
	I1206 19:50:03.521975  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 45/60
	I1206 19:50:04.523461  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 46/60
	I1206 19:50:05.524954  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 47/60
	I1206 19:50:06.526326  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 48/60
	I1206 19:50:07.527949  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 49/60
	I1206 19:50:08.529923  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 50/60
	I1206 19:50:09.531702  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 51/60
	I1206 19:50:10.533074  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 52/60
	I1206 19:50:11.534530  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 53/60
	I1206 19:50:12.535974  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 54/60
	I1206 19:50:13.537809  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 55/60
	I1206 19:50:14.539092  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 56/60
	I1206 19:50:15.540496  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 57/60
	I1206 19:50:16.541758  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 58/60
	I1206 19:50:17.543480  114581 main.go:141] libmachine: (embed-certs-209025) Waiting for machine to stop 59/60
	I1206 19:50:18.544451  114581 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 19:50:18.544504  114581 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 19:50:18.547065  114581 out.go:177] 
	W1206 19:50:18.548774  114581 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1206 19:50:18.548797  114581 out.go:239] * 
	* 
	W1206 19:50:18.551988  114581 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 19:50:18.553399  114581 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-209025 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025
E1206 19:50:20.718253   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:50:30.005012   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025: exit status 3 (18.481654064s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:37.037535  115251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E1206 19:50:37.037554  115251 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-209025" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.68s)
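Each failed stop is followed by the same post-mortem probe: out/minikube-linux-amd64 status --format={{.Host}}, which needs an SSH session into the node and returns exit status 3 with Host "Error" when the dial to port 22 comes back "no route to host". The small Go sketch below shows that kind of reachability check in isolation; hostState and the hard-coded address are assumptions for demonstration, not the real status implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// hostState is an illustrative stand-in for the post-mortem check the
// helpers run after the failed stop: the status command needs to reach
// the node over SSH, and when the dial to port 22 fails it reports the
// host as "Error" rather than "Stopped" or "Running".
func hostState(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("status error: %v\n", err)
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	// Address taken from the log above; from any other machine the dial
	// simply fails, which is exactly the "Error" case being illustrated.
	fmt.Println(hostState("192.168.50.164:22"))
}

On a machine where 192.168.50.164 is unreachable this prints the same "Error" state the helpers report for embed-certs-209025 above.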

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559
E1206 19:49:54.163113   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:49:54.849954   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559: exit status 3 (3.199899552s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:49:56.717632  114938 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E1206 19:49:56.717667  114938 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-989559 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1206 19:49:59.283898   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-989559 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153808929s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-989559 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559: exit status 3 (3.061668167s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:05.933579  115008 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E1206 19:50:05.933608  115008 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-989559" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
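The EnableAddonAfterStop failures all have the same shape: the test first asserts that the Host field of minikube status reads "Stopped", then runs addons enable dashboard, which exits 11 with MK_ADDON_ENABLE_PAUSED because the paused-container check cannot SSH into the VM that was never actually stopped. Below is a compressed, illustrative Go version of that first assertion; it simply shells out to the binary and profile named in the log and is not the test's real code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Illustrative form of the assertion that fails in each
	// EnableAddonAfterStop run above: expect Host to be "Stopped", but
	// because the preceding Stop timed out the SSH probe fails and the
	// status command prints "Error" instead.
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", "no-preload-989559").Output()
	got := strings.TrimSpace(string(out))
	if err != nil || got != "Stopped" {
		fmt.Printf("expected post-stop host status to be %q but got %q (err: %v)\n",
			"Stopped", got, err)
	}
}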

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851: exit status 3 (3.203655992s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:08.753540  115048 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.33:22: connect: no route to host
	E1206 19:50:08.753569  115048 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.33:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-448851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1206 19:50:09.524460   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-448851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15102196s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.33:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-448851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851
E1206 19:50:15.330905   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851: exit status 3 (3.060685035s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:17.965745  115159 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.33:22: connect: no route to host
	E1206 19:50:17.965764  115159 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.33:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-448851" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
E1206 19:50:34.573939   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424: exit status 3 (3.19991243s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:37.165495  115326 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.22:22: connect: no route to host
	E1206 19:50:37.165520  115326 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.22:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-380424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-380424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155849449s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.22:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-380424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
E1206 19:50:43.359968   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:44.640383   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:46.048498   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424: exit status 3 (3.06040217s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:46.381768  115456 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.22:22: connect: no route to host
	E1206 19:50:46.381790  115456 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.22:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-380424" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025: exit status 3 (3.201098832s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:40.237682  115364 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E1206 19:50:40.237707  115364 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-209025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1206 19:50:42.081255   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:42.086560   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:42.096843   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:42.117131   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:42.157477   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:42.237953   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:42.398409   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:42.719263   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-209025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154699071s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-209025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025
E1206 19:50:47.200874   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025: exit status 3 (3.06016372s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 19:50:49.453759  115507 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host
	E1206 19:50:49.453785  115507 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-209025" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
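
Both EnableAddonAfterStop failures above follow the same pattern: after "stop --alsologtostderr -v=3" the test asserts that "status --format={{.Host}}" reports "Stopped", but SSH to the VM is unreachable ("no route to host"), the state comes back as "Error", and the follow-up "addons enable dashboard" then fails with MK_ADDON_ENABLE_PAUSED for the same reason. The following is a minimal Go sketch of that post-stop check, with the binary path and profile name copied from the log output above; it is an illustration only, not the helper used by start_stop_delete_test.go.

    // Sketch: reproduce the post-stop host-state check performed by
    // EnableAddonAfterStop. Binary path and profile name are taken from the
    // log above and are assumptions for illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "embed-certs-209025", "-n", "embed-certs-209025").CombinedOutput()
    	state := strings.TrimSpace(string(out))
    	// "minikube status" exits non-zero whenever the host is not Running
    	// (exit status 3 in the log above), so err alone does not distinguish
    	// a clean "Stopped" from "Error".
    	if state != "Stopped" {
    		fmt.Printf("unexpected post-stop host state %q (err: %v)\n", state, err)
    	}
    }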

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1206 20:00:51.525643   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:09:51.376088492 +0000 UTC m=+5365.460582318
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
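
The UserAppExistsAfterStop step waits up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard to come up in the kubernetes-dashboard namespace after the stop/start cycle; here the deadline expires before any such pod is scheduled. A rough Go sketch of an equivalent wait using kubectl is shown below; the context name is taken from the profile in this report and the polling interval is an arbitrary choice, so this is an illustration rather than the test's own polling helper.

    // Sketch: poll for the kubernetes-dashboard pod the way the test waits for it,
    // by shelling out to kubectl. Context name and interval are assumptions.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(9 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, _ := exec.Command("kubectl", "--context", "default-k8s-diff-port-380424",
    			"-n", "kubernetes-dashboard", "get", "pods",
    			"-l", "k8s-app=kubernetes-dashboard",
    			"-o", "jsonpath={.items[*].status.phase}").Output()
    		if strings.Contains(string(out), "Running") {
    			fmt.Println("kubernetes-dashboard pod is running")
    			return
    		}
    		time.Sleep(10 * time.Second)
    	}
    	fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard")
    }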
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-380424 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-380424 logs -n 25: (1.667409217s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-459609 sudo cat                              | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo find                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo crio                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:50:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:50:49.512923  115591 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:50:49.513070  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513079  115591 out.go:309] Setting ErrFile to fd 2...
	I1206 19:50:49.513084  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513305  115591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:50:49.513900  115591 out.go:303] Setting JSON to false
	I1206 19:50:49.514822  115591 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9200,"bootTime":1701883050,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:50:49.514886  115591 start.go:138] virtualization: kvm guest
	I1206 19:50:49.517831  115591 out.go:177] * [embed-certs-209025] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:50:49.519496  115591 notify.go:220] Checking for updates...
	I1206 19:50:49.519507  115591 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:50:49.521356  115591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:50:49.523241  115591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:50:49.525016  115591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:50:49.526632  115591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:50:49.528148  115591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:50:49.530159  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:50:49.530586  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.530636  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.545128  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I1206 19:50:49.545881  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.547345  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.547375  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.547739  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.547926  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.548144  115591 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:50:49.548458  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.548506  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.562767  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I1206 19:50:49.563139  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.563567  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.563588  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.563913  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.564112  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.600267  115591 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:50:49.601977  115591 start.go:298] selected driver: kvm2
	I1206 19:50:49.601996  115591 start.go:902] validating driver "kvm2" against &{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.602089  115591 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:50:49.602812  115591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.602891  115591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:50:49.617831  115591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:50:49.618234  115591 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:50:49.618296  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:50:49.618306  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:50:49.618316  115591 start_flags.go:323] config:
	{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.618468  115591 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.620428  115591 out.go:177] * Starting control plane node embed-certs-209025 in cluster embed-certs-209025
	I1206 19:50:46.558601  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:46.558636  115497 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:46.558644  115497 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:46.558714  115497 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:46.558724  115497 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:46.558837  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:50:46.559024  115497 start.go:365] acquiring machines lock for default-k8s-diff-port-380424: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:49.622242  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:49.622298  115591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:49.622320  115591 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:49.622419  115591 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:49.622431  115591 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:49.622525  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:50:49.622798  115591 start.go:365] acquiring machines lock for embed-certs-209025: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:51.693503  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:50:54.765519  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:00.845535  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:03.917509  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:09.997591  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:13.069427  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:19.149482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:22.221565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:28.301531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:31.373569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:37.453523  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:40.525531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:46.605538  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:49.677544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:55.757544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:58.829552  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:04.909569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:07.981555  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:14.061549  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:17.133576  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:23.213558  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:26.285482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:32.365550  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:35.437574  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:41.517473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:44.589458  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:50.669534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:53.741496  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:59.821528  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:02.893489  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:08.973534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:12.045527  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:18.125473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:21.197472  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:27.277533  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:30.349580  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:36.429514  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:39.501584  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:45.581524  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:48.653547  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:54.733543  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:57.805491  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:03.885571  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:06.957565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:13.037470  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:16.109461  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:22.189477  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:25.261563  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:31.341534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:34.413513  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:40.493530  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:43.497878  115217 start.go:369] acquired machines lock for "old-k8s-version-448851" in 4m25.369261381s
	I1206 19:54:43.497937  115217 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:54:43.497949  115217 fix.go:54] fixHost starting: 
	I1206 19:54:43.498301  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:54:43.498331  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:54:43.513072  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I1206 19:54:43.513520  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:54:43.514005  115217 main.go:141] libmachine: Using API Version  1
	I1206 19:54:43.514035  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:54:43.514375  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:54:43.514571  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:54:43.514716  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 19:54:43.516245  115217 fix.go:102] recreateIfNeeded on old-k8s-version-448851: state=Stopped err=<nil>
	I1206 19:54:43.516266  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	W1206 19:54:43.516391  115217 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:54:43.518413  115217 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-448851" ...
	I1206 19:54:43.495395  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:54:43.495445  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:54:43.497720  115078 machine.go:91] provisioned docker machine in 4m37.37101565s
	I1206 19:54:43.497766  115078 fix.go:56] fixHost completed within 4m37.395231745s
	I1206 19:54:43.497773  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 4m37.395253694s
	W1206 19:54:43.497813  115078 start.go:694] error starting host: provision: host is not running
	W1206 19:54:43.497949  115078 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1206 19:54:43.497960  115078 start.go:709] Will try again in 5 seconds ...
	I1206 19:54:43.519752  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Start
	I1206 19:54:43.519905  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring networks are active...
	I1206 19:54:43.520785  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network default is active
	I1206 19:54:43.521155  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network mk-old-k8s-version-448851 is active
	I1206 19:54:43.521477  115217 main.go:141] libmachine: (old-k8s-version-448851) Getting domain xml...
	I1206 19:54:43.522123  115217 main.go:141] libmachine: (old-k8s-version-448851) Creating domain...
	I1206 19:54:44.758967  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting to get IP...
	I1206 19:54:44.759812  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:44.760194  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:44.760255  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:44.760156  116186 retry.go:31] will retry after 298.997725ms: waiting for machine to come up
	I1206 19:54:45.061071  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.061521  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.061545  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.061474  116186 retry.go:31] will retry after 338.263286ms: waiting for machine to come up
	I1206 19:54:45.401161  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.401614  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.401641  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.401572  116186 retry.go:31] will retry after 468.987471ms: waiting for machine to come up
	I1206 19:54:45.872203  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.872644  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.872675  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.872586  116186 retry.go:31] will retry after 447.252306ms: waiting for machine to come up
	I1206 19:54:46.321277  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.321583  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.321609  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.321549  116186 retry.go:31] will retry after 591.206607ms: waiting for machine to come up
	I1206 19:54:46.913936  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.914351  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.914412  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.914260  116186 retry.go:31] will retry after 888.979547ms: waiting for machine to come up
	I1206 19:54:47.805332  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:47.805783  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:47.805814  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:47.805722  116186 retry.go:31] will retry after 1.088490978s: waiting for machine to come up
	I1206 19:54:48.499603  115078 start.go:365] acquiring machines lock for no-preload-989559: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:54:48.895892  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:48.896316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:48.896347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:48.896249  116186 retry.go:31] will retry after 1.145932913s: waiting for machine to come up
	I1206 19:54:50.043740  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:50.044169  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:50.044199  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:50.044136  116186 retry.go:31] will retry after 1.302468984s: waiting for machine to come up
	I1206 19:54:51.347696  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:51.348093  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:51.348124  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:51.348039  116186 retry.go:31] will retry after 2.099836852s: waiting for machine to come up
	I1206 19:54:53.450166  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:53.450638  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:53.450678  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:53.450566  116186 retry.go:31] will retry after 1.877757048s: waiting for machine to come up
	I1206 19:54:55.331257  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:55.331697  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:55.331752  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:55.331671  116186 retry.go:31] will retry after 3.399849348s: waiting for machine to come up
	I1206 19:54:58.733325  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:58.733712  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:58.733736  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:58.733664  116186 retry.go:31] will retry after 4.308323214s: waiting for machine to come up
	I1206 19:55:04.350333  115497 start.go:369] acquired machines lock for "default-k8s-diff-port-380424" in 4m17.791271724s
	I1206 19:55:04.350411  115497 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:04.350426  115497 fix.go:54] fixHost starting: 
	I1206 19:55:04.350878  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:04.350927  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:04.367462  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I1206 19:55:04.367935  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:04.368546  115497 main.go:141] libmachine: Using API Version  1
	I1206 19:55:04.368580  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:04.368972  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:04.369197  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:04.369417  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 19:55:04.370940  115497 fix.go:102] recreateIfNeeded on default-k8s-diff-port-380424: state=Stopped err=<nil>
	I1206 19:55:04.370982  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	W1206 19:55:04.371135  115497 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:04.373809  115497 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-380424" ...
	I1206 19:55:03.047055  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047484  115217 main.go:141] libmachine: (old-k8s-version-448851) Found IP for machine: 192.168.61.33
	I1206 19:55:03.047516  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has current primary IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047527  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserving static IP address...
	I1206 19:55:03.048083  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.048116  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | skip adding static IP to network mk-old-k8s-version-448851 - found existing host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"}
	I1206 19:55:03.048135  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserved static IP address: 192.168.61.33
	I1206 19:55:03.048146  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting for SSH to be available...
	I1206 19:55:03.048158  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Getting to WaitForSSH function...
	I1206 19:55:03.050347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.050682  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050793  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH client type: external
	I1206 19:55:03.050872  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa (-rw-------)
	I1206 19:55:03.050913  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:03.050935  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | About to run SSH command:
	I1206 19:55:03.050956  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | exit 0
	I1206 19:55:03.137326  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:03.137753  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetConfigRaw
	I1206 19:55:03.138415  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.140903  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141314  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.141341  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141671  115217 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/config.json ...
	I1206 19:55:03.141899  115217 machine.go:88] provisioning docker machine ...
	I1206 19:55:03.141924  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:03.142133  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142284  115217 buildroot.go:166] provisioning hostname "old-k8s-version-448851"
	I1206 19:55:03.142305  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142511  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.144778  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145119  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.145144  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145289  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.145451  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145582  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145705  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.145829  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.146319  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.146343  115217 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448851 && echo "old-k8s-version-448851" | sudo tee /etc/hostname
	I1206 19:55:03.270447  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448851
	
	I1206 19:55:03.270490  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.273453  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273769  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.273802  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273957  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.274148  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274326  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274426  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.274576  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.274893  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.274910  115217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448851/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:03.395200  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:03.395232  115217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:03.395281  115217 buildroot.go:174] setting up certificates
	I1206 19:55:03.395298  115217 provision.go:83] configureAuth start
	I1206 19:55:03.395320  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.395585  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.397989  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398373  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.398405  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398547  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.400869  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401196  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.401223  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401369  115217 provision.go:138] copyHostCerts
	I1206 19:55:03.401492  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:03.401513  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:03.401600  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:03.401718  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:03.401730  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:03.401778  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:03.401857  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:03.401867  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:03.401899  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:03.401971  115217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448851 san=[192.168.61.33 192.168.61.33 localhost 127.0.0.1 minikube old-k8s-version-448851]
	I1206 19:55:03.655010  115217 provision.go:172] copyRemoteCerts
	I1206 19:55:03.655082  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:03.655110  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.657860  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658301  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.658336  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658529  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.658738  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.658914  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.659068  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:03.742021  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:03.765284  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 19:55:03.788562  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
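	A quick way to sanity-check the three files copied above is to inspect the server certificate's SANs on the guest with openssl; this is a hypothetical spot-check under the assumption the files landed in /etc/docker as shown, not a command from the captured run:
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	  # expected to list 192.168.61.33, localhost, 127.0.0.1, minikube and old-k8s-version-448851,
	  # matching the san=[...] list from the "generating server cert" line above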
	I1206 19:55:03.811692  115217 provision.go:86] duration metric: configureAuth took 416.376347ms
	I1206 19:55:03.811722  115217 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:03.811943  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 19:55:03.812058  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.814518  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.814898  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.814934  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.815149  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.815371  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815541  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.815787  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.816094  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.816121  115217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:04.115752  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:04.115780  115217 machine.go:91] provisioned docker machine in 973.864642ms
	I1206 19:55:04.115790  115217 start.go:300] post-start starting for "old-k8s-version-448851" (driver="kvm2")
	I1206 19:55:04.115802  115217 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:04.115825  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.116197  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:04.116226  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.119234  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119559  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.119586  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119801  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.120047  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.120228  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.120391  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.203195  115217 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:04.207210  115217 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:04.207238  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:04.207315  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:04.207392  115217 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:04.207475  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:04.215469  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:04.238407  115217 start.go:303] post-start completed in 122.598676ms
	I1206 19:55:04.238437  115217 fix.go:56] fixHost completed within 20.740486511s
	I1206 19:55:04.238467  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.241147  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241522  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.241558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241720  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.241992  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242187  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242346  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.242488  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:04.242801  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:04.242813  115217 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 19:55:04.350154  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892504.298339573
	
	I1206 19:55:04.350177  115217 fix.go:206] guest clock: 1701892504.298339573
	I1206 19:55:04.350185  115217 fix.go:219] Guest: 2023-12-06 19:55:04.298339573 +0000 UTC Remote: 2023-12-06 19:55:04.238442081 +0000 UTC m=+286.264851054 (delta=59.897492ms)
	I1206 19:55:04.350206  115217 fix.go:190] guest clock delta is within tolerance: 59.897492ms
	I1206 19:55:04.350212  115217 start.go:83] releasing machines lock for "old-k8s-version-448851", held for 20.852295937s
	I1206 19:55:04.350240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.350562  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:04.353070  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353519  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.353547  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353732  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354331  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354552  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354641  115217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:04.354689  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.354815  115217 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:04.354844  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.357316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357703  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.357734  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357841  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358006  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.358031  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358052  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.358161  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358241  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358322  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358448  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.358575  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358734  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.469402  115217 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:04.475231  115217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:04.618312  115217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:04.625482  115217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:04.625557  115217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:04.640251  115217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:04.640281  115217 start.go:475] detecting cgroup driver to use...
	I1206 19:55:04.640368  115217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:04.654153  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:04.666295  115217 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:04.666387  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:04.678579  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:04.692472  115217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:04.793090  115217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:04.909331  115217 docker.go:219] disabling docker service ...
	I1206 19:55:04.909399  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:04.922479  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:04.934122  115217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:05.048844  115217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:05.156415  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:05.172525  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:05.190303  115217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1206 19:55:05.190363  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.199967  115217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:05.200048  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.209725  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.218770  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.227835  115217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:05.237006  115217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:05.244839  115217 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:05.244899  115217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:05.256528  115217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:05.266360  115217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:05.387203  115217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:05.555553  115217 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:05.555668  115217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:05.564619  115217 start.go:543] Will wait 60s for crictl version
	I1206 19:55:05.564682  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:05.568979  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:05.611883  115217 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:05.611986  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.666757  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.725942  115217 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
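	The sed edits at 19:55:05 rewrite the pause image, cgroup manager and conmon cgroup in the CRI-O drop-in before crio is restarted; a hypothetical spot-check of the resulting file (assuming a stock minikube buildroot image, not output captured in this run):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, roughly:
	  #   pause_image = "registry.k8s.io/pause:3.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"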
	I1206 19:55:04.375626  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Start
	I1206 19:55:04.375819  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring networks are active...
	I1206 19:55:04.376548  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network default is active
	I1206 19:55:04.376923  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network mk-default-k8s-diff-port-380424 is active
	I1206 19:55:04.377416  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Getting domain xml...
	I1206 19:55:04.378003  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Creating domain...
	I1206 19:55:05.667493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting to get IP...
	I1206 19:55:05.668629  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669112  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669148  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.669064  116315 retry.go:31] will retry after 259.414087ms: waiting for machine to come up
	I1206 19:55:05.930773  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931232  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.931129  116315 retry.go:31] will retry after 319.702286ms: waiting for machine to come up
	I1206 19:55:06.252911  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253423  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253458  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.253363  116315 retry.go:31] will retry after 403.286071ms: waiting for machine to come up
	I1206 19:55:05.727444  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:05.730503  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.730864  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:05.730900  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.731151  115217 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:05.735826  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:05.748254  115217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 19:55:05.748312  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:05.799380  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:05.799468  115217 ssh_runner.go:195] Run: which lz4
	I1206 19:55:05.803715  115217 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 19:55:05.808059  115217 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:05.808093  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1206 19:55:07.624367  115217 crio.go:444] Took 1.820689 seconds to copy over tarball
	I1206 19:55:07.624452  115217 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:06.658075  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.658710  116315 retry.go:31] will retry after 572.663186ms: waiting for machine to come up
	I1206 19:55:07.233562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233898  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233927  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.233861  116315 retry.go:31] will retry after 762.563485ms: waiting for machine to come up
	I1206 19:55:07.997980  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998453  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.998368  116315 retry.go:31] will retry after 885.694635ms: waiting for machine to come up
	I1206 19:55:08.885521  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885983  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:08.885918  116315 retry.go:31] will retry after 924.594214ms: waiting for machine to come up
	I1206 19:55:09.812796  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813271  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813305  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:09.813205  116315 retry.go:31] will retry after 1.485258028s: waiting for machine to come up
	I1206 19:55:11.300830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301385  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:11.301315  116315 retry.go:31] will retry after 1.232055429s: waiting for machine to come up
	I1206 19:55:10.452537  115217 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.828052972s)
	I1206 19:55:10.452565  115217 crio.go:451] Took 2.828166 seconds to extract the tarball
	I1206 19:55:10.452574  115217 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:10.493620  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:10.539181  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:10.539218  115217 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:55:10.539312  115217 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.539318  115217 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.539358  115217 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.539364  115217 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.539515  115217 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.539529  115217 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.539331  115217 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.539572  115217 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.540888  115217 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.540931  115217 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.540936  115217 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540880  115217 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.725027  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1206 19:55:10.762761  115217 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1206 19:55:10.762810  115217 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1206 19:55:10.762862  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.763731  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.766312  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.768181  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1206 19:55:10.773115  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.829543  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.841186  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.856309  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.873212  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.983390  115217 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1206 19:55:10.983444  115217 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.983463  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1206 19:55:10.983498  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983510  115217 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1206 19:55:10.983530  115217 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.983564  115217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1206 19:55:10.983628  115217 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1206 19:55:10.983663  115217 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.983672  115217 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1206 19:55:10.983700  115217 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.983712  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983567  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983730  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983802  115217 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1206 19:55:10.983829  115217 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.983861  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009102  115217 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1206 19:55:11.009135  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:11.009152  115217 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.009211  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009254  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1206 19:55:11.009273  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:11.009307  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:11.009342  115217 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1206 19:55:11.009355  115217 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009388  115217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009390  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:11.130238  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1206 19:55:11.158336  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1206 19:55:11.158375  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1206 19:55:11.158431  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.158438  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1206 19:55:11.158507  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1206 19:55:12.535831  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536331  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:12.536253  116315 retry.go:31] will retry after 1.865303927s: waiting for machine to come up
	I1206 19:55:14.402935  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403326  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403354  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:14.403268  116315 retry.go:31] will retry after 1.960994282s: waiting for machine to come up
	I1206 19:55:16.366289  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366792  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:16.366689  116315 retry.go:31] will retry after 2.933451161s: waiting for machine to come up
	I1206 19:55:13.478881  115217 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0: (2.320421557s)
	I1206 19:55:13.478937  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1206 19:55:13.478892  115217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.469478111s)
	I1206 19:55:13.478983  115217 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1206 19:55:13.479043  115217 cache_images.go:92] LoadImages completed in 2.939808867s
	W1206 19:55:13.479149  115217 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1206 19:55:13.479228  115217 ssh_runner.go:195] Run: crio config
	I1206 19:55:13.543270  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:13.543302  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:13.543328  115217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:13.543355  115217 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448851 NodeName:old-k8s-version-448851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 19:55:13.543557  115217 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-448851"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-448851
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.33:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
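	The generated config above is written out later in the run as /var/tmp/minikube/kubeadm.yaml.new; a hypothetical way to validate it against the pinned binaries (not executed here) would be a kubeadm dry run:
	  sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run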
	I1206 19:55:13.543700  115217 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-448851 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:13.543776  115217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1206 19:55:13.554524  115217 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:13.554611  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:13.566752  115217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1206 19:55:13.586027  115217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:13.603800  115217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1206 19:55:13.627098  115217 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:13.632470  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
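	After the /etc/hosts rewrite above, the control-plane alias should resolve locally on the guest; a hypothetical verification (not part of the captured run):
	  getent hosts control-plane.minikube.internal
	  # expected: 192.168.61.33   control-plane.minikube.internal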
	I1206 19:55:13.651452  115217 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851 for IP: 192.168.61.33
	I1206 19:55:13.651507  115217 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:13.651670  115217 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:13.651748  115217 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:13.651860  115217 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.key
	I1206 19:55:13.651932  115217 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key.efa8c2ad
	I1206 19:55:13.651994  115217 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key
	I1206 19:55:13.652142  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:13.652183  115217 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:13.652201  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:13.652241  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:13.652283  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:13.652326  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:13.652389  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:13.653344  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:13.687786  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:13.723604  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:13.756434  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:13.789066  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:13.821087  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:13.849840  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:13.876520  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:13.901763  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:13.932106  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:13.961708  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:13.991586  115217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:14.009848  115217 ssh_runner.go:195] Run: openssl version
	I1206 19:55:14.017661  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:14.031103  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037142  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037212  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.044737  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:14.058296  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:14.068591  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.073995  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.074067  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.079922  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:14.090541  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:14.100915  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106692  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106766  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.112592  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:14.122630  115217 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:14.128544  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:14.136649  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:14.143060  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:14.151002  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:14.157202  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:14.163456  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
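Everything from the openssl version check down to this point is the standard OpenSSL hashed-CA-directory setup: each extra CA is copied under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and a <hash>.0 symlink is dropped into /etc/ssl/certs; the `-checkend 86400` calls afterwards only ask whether each cluster certificate expires within the next 24 hours. A minimal Go sketch of the hash-and-link pattern (paths and error handling are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mimics `openssl x509 -hash -noout` followed by `ln -fs`,
// making a certificate discoverable in an OpenSSL hashed directory such as
// /etc/ssl/certs.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/708342.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}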
	I1206 19:55:14.171607  115217 kubeadm.go:404] StartCluster: {Name:old-k8s-version-448851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:14.171720  115217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:14.171771  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:14.216630  115217 cri.go:89] found id: ""
	I1206 19:55:14.216712  115217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:14.229800  115217 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:14.229832  115217 kubeadm.go:636] restartCluster start
	I1206 19:55:14.229889  115217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:14.242347  115217 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.243973  115217 kubeconfig.go:92] found "old-k8s-version-448851" server: "https://192.168.61.33:8443"
	I1206 19:55:14.247781  115217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:14.257060  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.257118  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.268619  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.268644  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.268692  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.279803  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.780509  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.780603  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.796116  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.280797  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.280910  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.296260  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.779895  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.780023  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.796115  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.280792  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.280884  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.297258  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.780884  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.781007  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.796430  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.279982  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.280088  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.291102  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.780721  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.780865  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.792253  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.302288  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302717  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302744  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:19.302670  116315 retry.go:31] will retry after 3.226665023s: waiting for machine to come up
	I1206 19:55:18.280684  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.280777  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.292535  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:18.780650  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.780722  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.793872  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.280431  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.280507  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.292188  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.780793  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.780914  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.791873  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.280527  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.280637  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.291886  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.780810  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.780890  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.791837  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.280389  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.280479  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.291743  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.780252  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.780343  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.791452  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.280013  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.280120  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.291240  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.780451  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.780528  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.791668  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.690245  115591 start.go:369] acquired machines lock for "embed-certs-209025" in 4m34.06740814s
	I1206 19:55:23.690318  115591 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:23.690327  115591 fix.go:54] fixHost starting: 
	I1206 19:55:23.690686  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:23.690728  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:23.706483  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I1206 19:55:23.706891  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:23.707367  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:55:23.707391  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:23.707744  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:23.707925  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:23.708059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 19:55:23.709586  115591 fix.go:102] recreateIfNeeded on embed-certs-209025: state=Stopped err=<nil>
	I1206 19:55:23.709612  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	W1206 19:55:23.709803  115591 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:23.712015  115591 out.go:177] * Restarting existing kvm2 VM for "embed-certs-209025" ...
	I1206 19:55:23.713472  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Start
	I1206 19:55:23.713637  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring networks are active...
	I1206 19:55:23.714335  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network default is active
	I1206 19:55:23.714639  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network mk-embed-certs-209025 is active
	I1206 19:55:23.714978  115591 main.go:141] libmachine: (embed-certs-209025) Getting domain xml...
	I1206 19:55:23.715617  115591 main.go:141] libmachine: (embed-certs-209025) Creating domain...
	I1206 19:55:22.530618  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has current primary IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531107  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Found IP for machine: 192.168.72.22
	I1206 19:55:22.531117  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserving static IP address...
	I1206 19:55:22.531437  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.531465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | skip adding static IP to network mk-default-k8s-diff-port-380424 - found existing host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"}
	I1206 19:55:22.531485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Getting to WaitForSSH function...
	I1206 19:55:22.531496  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserved static IP address: 192.168.72.22
	I1206 19:55:22.531554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for SSH to be available...
	I1206 19:55:22.533485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533729  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.533752  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533853  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH client type: external
	I1206 19:55:22.533880  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa (-rw-------)
	I1206 19:55:22.533916  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:22.533941  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | About to run SSH command:
	I1206 19:55:22.533957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | exit 0
	I1206 19:55:22.620864  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:22.621194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetConfigRaw
	I1206 19:55:22.621844  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.624194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624565  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.624599  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624876  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:55:22.625062  115497 machine.go:88] provisioning docker machine ...
	I1206 19:55:22.625081  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:22.625310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625481  115497 buildroot.go:166] provisioning hostname "default-k8s-diff-port-380424"
	I1206 19:55:22.625502  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625635  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.627886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628227  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.628255  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.628499  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628658  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628784  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.628940  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.629440  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.629462  115497 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-380424 && echo "default-k8s-diff-port-380424" | sudo tee /etc/hostname
	I1206 19:55:22.753829  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-380424
	
	I1206 19:55:22.753867  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.756620  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.756958  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.757001  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.757129  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.757375  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757558  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757700  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.757868  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.758197  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.758252  115497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-380424' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-380424/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-380424' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:22.878138  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:22.878175  115497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:22.878202  115497 buildroot.go:174] setting up certificates
	I1206 19:55:22.878246  115497 provision.go:83] configureAuth start
	I1206 19:55:22.878259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.878557  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.881145  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881515  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.881547  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881657  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.883591  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.883943  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.883981  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.884062  115497 provision.go:138] copyHostCerts
	I1206 19:55:22.884122  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:22.884135  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:22.884203  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:22.884334  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:22.884346  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:22.884375  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:22.884446  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:22.884457  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:22.884484  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:22.884539  115497 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-380424 san=[192.168.72.22 192.168.72.22 localhost 127.0.0.1 minikube default-k8s-diff-port-380424]
	I1206 19:55:22.973559  115497 provision.go:172] copyRemoteCerts
	I1206 19:55:22.973627  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:22.973660  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.976374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976656  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.976695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976888  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.977068  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.977300  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.977468  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.061925  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:23.085093  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1206 19:55:23.108283  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:55:23.131666  115497 provision.go:86] duration metric: configureAuth took 253.404471ms
	I1206 19:55:23.131701  115497 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:23.131879  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:23.131957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.134672  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135033  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.135077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135214  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.135436  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135622  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135781  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.135941  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.136393  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.136427  115497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:23.445361  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:23.445389  115497 machine.go:91] provisioned docker machine in 820.312346ms
	I1206 19:55:23.445404  115497 start.go:300] post-start starting for "default-k8s-diff-port-380424" (driver="kvm2")
	I1206 19:55:23.445418  115497 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:23.445457  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.445851  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:23.445886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.448493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.448851  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.448879  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.449021  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.449210  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.449408  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.449562  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.535493  115497 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:23.539696  115497 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:23.539718  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:23.539780  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:23.539862  115497 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:23.539968  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:23.548629  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:23.572264  115497 start.go:303] post-start completed in 126.842848ms
	I1206 19:55:23.572287  115497 fix.go:56] fixHost completed within 19.221864403s
	I1206 19:55:23.572321  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.575329  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.575739  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575890  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.576093  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576272  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576429  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.576599  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.577046  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.577064  115497 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 19:55:23.690035  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892523.637580982
	
	I1206 19:55:23.690064  115497 fix.go:206] guest clock: 1701892523.637580982
	I1206 19:55:23.690084  115497 fix.go:219] Guest: 2023-12-06 19:55:23.637580982 +0000 UTC Remote: 2023-12-06 19:55:23.572291664 +0000 UTC m=+277.181979500 (delta=65.289318ms)
	I1206 19:55:23.690146  115497 fix.go:190] guest clock delta is within tolerance: 65.289318ms
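The fix.go lines above read the guest's `date +%s.%N` output, compare it with the host clock, and only accept the reused VM when the skew is small. A toy reproduction of that delta computation using the two timestamps from this log (the 1-second tolerance is an assumption for illustration, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// stdout of `date +%s.%N` on the guest, taken from the log above.
	guestOut := "1701892523.637580982"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second))).UTC()

	// Host-side reference timestamp from the same log line.
	host := time.Date(2023, 12, 6, 19, 55, 23, 572291664, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < time.Second)
}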
	I1206 19:55:23.690159  115497 start.go:83] releasing machines lock for "default-k8s-diff-port-380424", held for 19.339778523s
	I1206 19:55:23.690192  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.690465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:23.692996  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693337  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.693369  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694250  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694336  115497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:23.694390  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.694463  115497 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:23.694486  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.696938  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697063  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697393  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697473  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697514  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697593  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697674  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.697675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697876  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.697899  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.698044  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.698038  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.698167  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.786973  115497 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:23.814262  115497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:23.954235  115497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:23.961434  115497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:23.961523  115497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:23.981459  115497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:23.981488  115497 start.go:475] detecting cgroup driver to use...
	I1206 19:55:23.981550  115497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:24.000294  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:24.013738  115497 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:24.013799  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:24.030844  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:24.044583  115497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:24.161979  115497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:24.296507  115497 docker.go:219] disabling docker service ...
	I1206 19:55:24.296580  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:24.311171  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:24.323538  115497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:24.440425  115497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:24.570168  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:24.583169  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:24.600733  115497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:24.600790  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.610057  115497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:24.610129  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.621925  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.631383  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.640414  115497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:24.649853  115497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:24.657999  115497 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:24.658052  115497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:24.672821  115497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:24.681200  115497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:24.812790  115497 ssh_runner.go:195] Run: sudo systemctl restart crio
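The sed invocations above, followed by `systemctl restart crio`, are what point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager that the kubelet configuration earlier in this log expects. A rough Go equivalent of those edits, with the starting contents of /etc/crio/crio.conf.d/02-crio.conf assumed for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting contents of /etc/crio/crio.conf.d/02-crio.conf;
	// the real file on the minikube ISO may differ.
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same effect as the sed edits in the log: pin the pause image to 3.9,
	// switch to the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod".
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}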
	I1206 19:55:24.989383  115497 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:24.989483  115497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:24.995335  115497 start.go:543] Will wait 60s for crictl version
	I1206 19:55:24.995404  115497 ssh_runner.go:195] Run: which crictl
	I1206 19:55:24.999307  115497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:25.038932  115497 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:25.039046  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.085844  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.148264  115497 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:25.149676  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:25.152759  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153157  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:25.153201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153451  115497 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:25.157621  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:25.173609  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:25.173680  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:25.223564  115497 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:25.223647  115497 ssh_runner.go:195] Run: which lz4
	I1206 19:55:25.228720  115497 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 19:55:25.234028  115497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:25.234061  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:23.280317  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.280398  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.291959  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.780005  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.780086  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.794371  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:24.257148  115217 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:24.257182  115217 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:24.257196  115217 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:24.257291  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:24.300759  115217 cri.go:89] found id: ""
	I1206 19:55:24.300832  115217 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:24.319509  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:24.329215  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:24.329310  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338150  115217 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338187  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:24.490031  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.123737  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.359750  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.550542  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.697003  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:25.697091  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:25.713836  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.231509  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.730965  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.231602  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.731612  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.763155  115217 api_server.go:72] duration metric: took 2.066152846s to wait for apiserver process to appear ...
	I1206 19:55:27.763181  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:27.763200  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:25.055509  115591 main.go:141] libmachine: (embed-certs-209025) Waiting to get IP...
	I1206 19:55:25.056687  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.057138  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.057192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.057100  116938 retry.go:31] will retry after 304.168381ms: waiting for machine to come up
	I1206 19:55:25.363765  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.364265  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.364404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.364341  116938 retry.go:31] will retry after 351.729741ms: waiting for machine to come up
	I1206 19:55:25.718184  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.718746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.718774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.718650  116938 retry.go:31] will retry after 340.321802ms: waiting for machine to come up
	I1206 19:55:26.060168  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.060796  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.060843  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.060725  116938 retry.go:31] will retry after 422.434651ms: waiting for machine to come up
	I1206 19:55:26.484420  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.484967  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.485003  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.484931  116938 retry.go:31] will retry after 584.854153ms: waiting for machine to come up
	I1206 19:55:27.071766  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.072298  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.072325  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.072233  116938 retry.go:31] will retry after 710.482528ms: waiting for machine to come up
	I1206 19:55:27.784162  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.784660  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.784695  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.784560  116938 retry.go:31] will retry after 754.279817ms: waiting for machine to come up
	I1206 19:55:28.540261  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:28.540788  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:28.540818  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:28.540728  116938 retry.go:31] will retry after 1.359726156s: waiting for machine to come up
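The retry.go:31 lines above show libmachine polling the libvirt DHCP leases with a growing delay until the embed-certs-209025 domain reports an IP. Below is a minimal Go sketch of that kind of wait loop; the lookupIP helper, the starting delay, and the growth/jitter factors are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
// for the domain's MAC address; it fails until a lease appears.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, slightly jittered delay,
// mirroring the "will retry after ..." lines in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond // assumed starting point
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:4d:27:5b", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}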
	I1206 19:55:27.194696  115497 crio.go:444] Took 1.966010 seconds to copy over tarball
	I1206 19:55:27.194774  115497 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:30.501183  115497 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.306375512s)
	I1206 19:55:30.501222  115497 crio.go:451] Took 3.306493 seconds to extract the tarball
	I1206 19:55:30.501249  115497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:30.542574  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:30.587381  115497 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:30.587405  115497 cache_images.go:84] Images are preloaded, skipping loading
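The preload verification above works by asking crictl for its image list as JSON and checking that an expected control-plane image (for example registry.k8s.io/kube-apiserver:v1.28.4) is present. A rough Go sketch of that check follows; the JSON field names are assumptions based on typical `crictl images --output json` output, not taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models the relevant part of `crictl images --output json`;
// the field names here are assumed and may differ between crictl versions.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage runs crictl and reports whether any image carries the wanted tag.
func hasImage(wanted string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, wanted) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Println("preloaded image present:", ok)
}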
	I1206 19:55:30.587483  115497 ssh_runner.go:195] Run: crio config
	I1206 19:55:30.649117  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:30.649140  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:30.649163  115497 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:30.649191  115497 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.22 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-380424 NodeName:default-k8s-diff-port-380424 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:30.649383  115497 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.22
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-380424"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:30.649487  115497 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-380424 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1206 19:55:30.649561  115497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:30.659186  115497 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:30.659297  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:30.668534  115497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1206 19:55:30.684815  115497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:30.701801  115497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1206 19:55:30.721756  115497 ssh_runner.go:195] Run: grep 192.168.72.22	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:30.726656  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:30.738943  115497 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424 for IP: 192.168.72.22
	I1206 19:55:30.738981  115497 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:30.739159  115497 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:30.739219  115497 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:30.739322  115497 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.key
	I1206 19:55:30.739426  115497 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key.99d663cb
	I1206 19:55:30.739489  115497 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key
	I1206 19:55:30.739629  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:30.739672  115497 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:30.739689  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:30.739726  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:30.739762  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:30.739801  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:30.739872  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:30.740532  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:30.766689  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:30.792892  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:30.817640  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:30.842916  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:30.868057  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:30.893993  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:30.924631  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:30.953503  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:30.980162  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:31.007247  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:31.034274  115497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:31.054544  115497 ssh_runner.go:195] Run: openssl version
	I1206 19:55:31.062053  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:31.077159  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083640  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083707  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.091093  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:31.105305  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:31.117767  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123703  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123798  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.131531  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:31.142449  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:31.157311  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163707  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163783  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.170831  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:31.183300  115497 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:31.188165  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:31.194562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:31.201769  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:31.209562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:31.217346  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:31.225522  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
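Each `openssl x509 -noout -in ... -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. The same check can be expressed directly in Go with crypto/x509; this is a hedged sketch, not minikube's code, and the path is simply one of the files from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd reports whether the PEM-encoded certificate at path expires
// within the given window -- the question `openssl x509 -checkend` answers.
func checkEnd(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkEnd("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}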
	I1206 19:55:31.233755  115497 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-380424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:31.233889  115497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:31.233952  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:31.278891  115497 cri.go:89] found id: ""
	I1206 19:55:31.278972  115497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:31.291971  115497 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:31.291999  115497 kubeadm.go:636] restartCluster start
	I1206 19:55:31.292070  115497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:31.304934  115497 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.306156  115497 kubeconfig.go:92] found "default-k8s-diff-port-380424" server: "https://192.168.72.22:8444"
	I1206 19:55:31.308710  115497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:31.321910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.321976  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.339075  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.339096  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.339143  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.354436  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.765826  115217 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 19:55:32.765895  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:29.902670  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:29.903123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:29.903152  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:29.903081  116938 retry.go:31] will retry after 1.188380941s: waiting for machine to come up
	I1206 19:55:31.092707  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:31.093278  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:31.093311  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:31.093245  116938 retry.go:31] will retry after 1.854046475s: waiting for machine to come up
	I1206 19:55:32.948423  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:32.948866  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:32.948891  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:32.948827  116938 retry.go:31] will retry after 2.868825903s: waiting for machine to come up
	I1206 19:55:34.066100  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:34.066146  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:34.566904  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:34.573643  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:34.573675  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.066235  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.076927  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:35.076966  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.566361  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.574853  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 19:55:35.585855  115217 api_server.go:141] control plane version: v1.16.0
	I1206 19:55:35.585895  115217 api_server.go:131] duration metric: took 7.822706447s to wait for apiserver health ...
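The healthz polling above keeps probing https://192.168.61.33:8443/healthz, tolerating the 403 (anonymous user) and 500 (post-start hooks still failing) responses until a plain 200 "ok" comes back. A minimal Go sketch of such a probe is below, assuming an insecure TLS client and a fixed poll interval; both are assumptions, since the log does not show the actual client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes, mirroring the loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver uses a cluster-internal CA, so certificate
		// verification is skipped for this simple health probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.33:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}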
	I1206 19:55:35.585908  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:35.585917  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:35.587984  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:31.855148  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.855275  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.867628  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.355238  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.355330  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.368154  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.854710  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.854820  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.870926  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.355493  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.355586  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.371984  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.854511  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.854604  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.871260  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.354793  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.354897  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.371333  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.855487  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.855575  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.868348  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.354949  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.355026  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.367357  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.854910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.855003  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.871382  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:36.354908  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.355047  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.371112  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.589529  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:35.599454  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:35.616803  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:35.626793  115217 system_pods.go:59] 7 kube-system pods found
	I1206 19:55:35.626829  115217 system_pods.go:61] "coredns-5644d7b6d9-nrtk9" [447f7434-3f97-4e3f-9451-d9a54bff7ba1] Running
	I1206 19:55:35.626837  115217 system_pods.go:61] "etcd-old-k8s-version-448851" [77c1f822-788f-4f28-8f8e-54278d5d9e10] Running
	I1206 19:55:35.626843  115217 system_pods.go:61] "kube-apiserver-old-k8s-version-448851" [d3cf3d55-8862-4f81-ac61-99b202469859] Running
	I1206 19:55:35.626851  115217 system_pods.go:61] "kube-controller-manager-old-k8s-version-448851" [58ffb9bc-b5a3-4c64-a78f-da0011e6c277] Running
	I1206 19:55:35.626869  115217 system_pods.go:61] "kube-proxy-sw4qv" [6c08ab4a-447b-42e9-a617-ac35d66cf4ea] Running
	I1206 19:55:35.626879  115217 system_pods.go:61] "kube-scheduler-old-k8s-version-448851" [378ead75-3fd6-4cfd-a063-f2afc3a1cd12] Running
	I1206 19:55:35.626886  115217 system_pods.go:61] "storage-provisioner" [cce901c3-37d9-4ae2-ab9c-99bb7fda6d23] Running
	I1206 19:55:35.626901  115217 system_pods.go:74] duration metric: took 10.069819ms to wait for pod list to return data ...
	I1206 19:55:35.626910  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:35.632164  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:35.632240  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:35.632256  115217 node_conditions.go:105] duration metric: took 5.340532ms to run NodePressure ...
	I1206 19:55:35.632282  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:35.925990  115217 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:35.935849  115217 retry.go:31] will retry after 256.122518ms: kubelet not initialised
	I1206 19:55:36.197872  115217 retry.go:31] will retry after 337.717759ms: kubelet not initialised
	I1206 19:55:36.541368  115217 retry.go:31] will retry after 784.037462ms: kubelet not initialised
	I1206 19:55:37.331284  115217 retry.go:31] will retry after 921.381118ms: kubelet not initialised
	I1206 19:55:35.819131  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:35.819759  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:35.819793  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:35.819698  116938 retry.go:31] will retry after 2.281000862s: waiting for machine to come up
	I1206 19:55:38.103281  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:38.103807  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:38.103845  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:38.103736  116938 retry.go:31] will retry after 3.076134377s: waiting for machine to come up
	I1206 19:55:36.855191  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.855309  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.872110  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.354562  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.354682  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.370156  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.854600  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.854726  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.870621  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.355289  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.355391  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.368595  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.855116  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.855218  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.868455  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.354955  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.355048  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.368875  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.854833  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.854928  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.866765  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.354989  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.355106  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.367526  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.854791  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.854873  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.866579  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:41.322422  115497 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:41.322456  115497 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:41.322472  115497 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:41.322548  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:41.360234  115497 cri.go:89] found id: ""
	I1206 19:55:41.360301  115497 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:41.376968  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:41.387639  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:41.387694  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397586  115497 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397617  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:38.258758  115217 retry.go:31] will retry after 961.817778ms: kubelet not initialised
	I1206 19:55:39.225505  115217 retry.go:31] will retry after 1.751905914s: kubelet not initialised
	I1206 19:55:40.982344  115217 retry.go:31] will retry after 1.649102014s: kubelet not initialised
	I1206 19:55:42.639410  115217 retry.go:31] will retry after 3.317462401s: kubelet not initialised
	I1206 19:55:41.182443  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:41.182893  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:41.182930  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:41.182837  116938 retry.go:31] will retry after 4.029797575s: waiting for machine to come up
	I1206 19:55:41.519134  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.404075  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.613308  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.707533  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.796041  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:42.796139  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:42.816782  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.336582  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.836183  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.336879  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.836718  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.336249  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.363947  115497 api_server.go:72] duration metric: took 2.567911355s to wait for apiserver process to appear ...
	I1206 19:55:45.363968  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:45.363984  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:46.486502  115078 start.go:369] acquired machines lock for "no-preload-989559" in 57.98684139s
	I1206 19:55:46.486560  115078 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:46.486570  115078 fix.go:54] fixHost starting: 
	I1206 19:55:46.487006  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:46.487052  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:46.506170  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1206 19:55:46.506576  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:46.507081  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:55:46.507110  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:46.507412  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:46.507600  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:55:46.508110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:55:46.509817  115078 fix.go:102] recreateIfNeeded on no-preload-989559: state=Stopped err=<nil>
	I1206 19:55:46.509843  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	W1206 19:55:46.509988  115078 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:46.512103  115078 out.go:177] * Restarting existing kvm2 VM for "no-preload-989559" ...
	I1206 19:55:45.214823  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215271  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has current primary IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215293  115591 main.go:141] libmachine: (embed-certs-209025) Found IP for machine: 192.168.50.164
	I1206 19:55:45.215341  115591 main.go:141] libmachine: (embed-certs-209025) Reserving static IP address...
	I1206 19:55:45.215738  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.215772  115591 main.go:141] libmachine: (embed-certs-209025) DBG | skip adding static IP to network mk-embed-certs-209025 - found existing host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"}
	I1206 19:55:45.215787  115591 main.go:141] libmachine: (embed-certs-209025) Reserved static IP address: 192.168.50.164
	I1206 19:55:45.215805  115591 main.go:141] libmachine: (embed-certs-209025) Waiting for SSH to be available...
	I1206 19:55:45.215821  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Getting to WaitForSSH function...
	I1206 19:55:45.217850  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.218219  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218370  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH client type: external
	I1206 19:55:45.218404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa (-rw-------)
	I1206 19:55:45.218438  115591 main.go:141] libmachine: (embed-certs-209025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:45.218452  115591 main.go:141] libmachine: (embed-certs-209025) DBG | About to run SSH command:
	I1206 19:55:45.218475  115591 main.go:141] libmachine: (embed-certs-209025) DBG | exit 0
	I1206 19:55:45.309353  115591 main.go:141] libmachine: (embed-certs-209025) DBG | SSH cmd err, output: <nil>: 
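WaitForSSH above shells out to the system ssh binary with the options shown and runs `exit 0` until the connection succeeds. A small Go sketch of that probe follows; the retry count and interval are assumptions, while the key path, user, and IP are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `ssh ... exit 0` once and reports whether it succeeded.
func sshReachable(ip, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ip := "192.168.50.164"
	key := "/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa"
	for i := 0; i < 10; i++ { // assumed retry budget
		if sshReachable(ip, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}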
	I1206 19:55:45.309758  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetConfigRaw
	I1206 19:55:45.310547  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.313019  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.313369  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313638  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:55:45.313844  115591 machine.go:88] provisioning docker machine ...
	I1206 19:55:45.313870  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:45.314081  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314264  115591 buildroot.go:166] provisioning hostname "embed-certs-209025"
	I1206 19:55:45.314298  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314509  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.316952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317361  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.317395  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.317821  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.317954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.318079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.318235  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.318665  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.318683  115591 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-209025 && echo "embed-certs-209025" | sudo tee /etc/hostname
	I1206 19:55:45.459071  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-209025
	
	I1206 19:55:45.459107  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.461953  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.462362  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462592  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.462814  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463010  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.463353  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.463887  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.463916  115591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-209025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-209025/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-209025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:45.597186  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:45.597220  115591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:45.597253  115591 buildroot.go:174] setting up certificates
	I1206 19:55:45.597270  115591 provision.go:83] configureAuth start
	I1206 19:55:45.597288  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.597658  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.600582  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.600954  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.600983  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.601138  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.603355  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.603774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603942  115591 provision.go:138] copyHostCerts
	I1206 19:55:45.604012  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:45.604037  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:45.604113  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:45.604227  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:45.604243  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:45.604277  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:45.604353  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:45.604363  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:45.604390  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:45.604454  115591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-209025 san=[192.168.50.164 192.168.50.164 localhost 127.0.0.1 minikube embed-certs-209025]
	I1206 19:55:45.706944  115591 provision.go:172] copyRemoteCerts
	I1206 19:55:45.707028  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:45.707069  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.709985  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710357  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.710398  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710530  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.710738  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.710917  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.711092  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:45.807035  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:45.831480  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:45.855902  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 19:55:45.882797  115591 provision.go:86] duration metric: configureAuth took 285.508678ms
	I1206 19:55:45.882831  115591 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:45.883074  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:45.883156  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.886130  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886576  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.886611  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886825  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.887026  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887198  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887348  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.887570  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.887900  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.887926  115591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:46.217654  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:46.217732  115591 machine.go:91] provisioned docker machine in 903.869734ms
	I1206 19:55:46.217748  115591 start.go:300] post-start starting for "embed-certs-209025" (driver="kvm2")
	I1206 19:55:46.217762  115591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:46.217788  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.218154  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:46.218190  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.220968  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221345  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.221378  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221557  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.221781  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.221951  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.222093  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.316289  115591 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:46.321014  115591 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:46.321046  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:46.321108  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:46.321183  115591 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:46.321304  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:46.331967  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:46.358983  115591 start.go:303] post-start completed in 141.214825ms
	I1206 19:55:46.359014  115591 fix.go:56] fixHost completed within 22.668688221s
	I1206 19:55:46.359037  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.361846  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362179  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.362212  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362452  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.362704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.362897  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.363073  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.363310  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:46.363803  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:46.363823  115591 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:46.486321  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892546.422221924
	
	I1206 19:55:46.486350  115591 fix.go:206] guest clock: 1701892546.422221924
	I1206 19:55:46.486361  115591 fix.go:219] Guest: 2023-12-06 19:55:46.422221924 +0000 UTC Remote: 2023-12-06 19:55:46.359018 +0000 UTC m=+296.897065855 (delta=63.203924ms)
	I1206 19:55:46.486385  115591 fix.go:190] guest clock delta is within tolerance: 63.203924ms
	I1206 19:55:46.486391  115591 start.go:83] releasing machines lock for "embed-certs-209025", held for 22.796102432s
	I1206 19:55:46.486420  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.486727  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:46.489589  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.489890  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.489922  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.490079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490643  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490836  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490924  115591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:46.490974  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.491257  115591 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:46.491281  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.494034  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494326  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494379  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494405  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.494748  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494900  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.494958  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.495019  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495144  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.495137  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.495269  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495397  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.587575  115591 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:46.614901  115591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:46.764133  115591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:46.771049  115591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:46.771133  115591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:46.786157  115591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:46.786187  115591 start.go:475] detecting cgroup driver to use...
	I1206 19:55:46.786262  115591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:46.801158  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:46.812881  115591 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:46.812948  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:46.825139  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:46.838071  115591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:46.949823  115591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:47.080490  115591 docker.go:219] disabling docker service ...
	I1206 19:55:47.080572  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:47.094773  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:47.107963  115591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:47.233536  115591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:47.360425  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:47.377453  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:47.395959  115591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:47.396026  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.406599  115591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:47.406696  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.417082  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.427463  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.438246  115591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:47.449910  115591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:47.459620  115591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:47.459675  115591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:47.476230  115591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:47.486777  115591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:47.597395  115591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:47.809260  115591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:47.809348  115591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:47.815968  115591 start.go:543] Will wait 60s for crictl version
	I1206 19:55:47.816035  115591 ssh_runner.go:195] Run: which crictl
	I1206 19:55:47.820214  115591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:47.869345  115591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:47.869435  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.923602  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.983187  115591 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:45.963265  115217 retry.go:31] will retry after 4.496095904s: kubelet not initialised
	I1206 19:55:47.984954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:47.988218  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.988742  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:47.988775  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.989031  115591 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:47.994471  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:48.008964  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:48.009022  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:48.056234  115591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:48.056333  115591 ssh_runner.go:195] Run: which lz4
	I1206 19:55:48.061573  115591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:48.066119  115591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:48.066156  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:46.513897  115078 main.go:141] libmachine: (no-preload-989559) Calling .Start
	I1206 19:55:46.514072  115078 main.go:141] libmachine: (no-preload-989559) Ensuring networks are active...
	I1206 19:55:46.514830  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network default is active
	I1206 19:55:46.515153  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network mk-no-preload-989559 is active
	I1206 19:55:46.515533  115078 main.go:141] libmachine: (no-preload-989559) Getting domain xml...
	I1206 19:55:46.516251  115078 main.go:141] libmachine: (no-preload-989559) Creating domain...
	I1206 19:55:47.899847  115078 main.go:141] libmachine: (no-preload-989559) Waiting to get IP...
	I1206 19:55:47.900939  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:47.901513  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:47.901634  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:47.901487  117094 retry.go:31] will retry after 244.343929ms: waiting for machine to come up
	I1206 19:55:48.148254  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.148888  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.148927  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.148835  117094 retry.go:31] will retry after 258.755356ms: waiting for machine to come up
	I1206 19:55:48.409550  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.410401  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.410427  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.410308  117094 retry.go:31] will retry after 321.790541ms: waiting for machine to come up
	I1206 19:55:48.734055  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.734744  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.734768  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.734646  117094 retry.go:31] will retry after 464.789653ms: waiting for machine to come up
	I1206 19:55:49.201462  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.202032  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.202065  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.201985  117094 retry.go:31] will retry after 541.238407ms: waiting for machine to come up
	I1206 19:55:49.744792  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.745432  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.745461  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.745338  117094 retry.go:31] will retry after 791.407194ms: waiting for machine to come up
	I1206 19:55:50.538151  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:50.538857  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:50.538883  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:50.538741  117094 retry.go:31] will retry after 1.11510814s: waiting for machine to come up
	I1206 19:55:49.730248  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.730287  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:49.730318  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:49.788747  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.788796  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:50.289144  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.301437  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.301479  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:50.789018  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.800325  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.800374  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:51.289899  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:51.297638  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 19:55:51.310738  115497 api_server.go:141] control plane version: v1.28.4
	I1206 19:55:51.310772  115497 api_server.go:131] duration metric: took 5.946796569s to wait for apiserver health ...
	I1206 19:55:51.310784  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:51.310793  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:51.312719  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:51.314431  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:51.335045  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:51.365598  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:51.381865  115497 system_pods.go:59] 8 kube-system pods found
	I1206 19:55:51.381914  115497 system_pods.go:61] "coredns-5dd5756b68-4rgxf" [2ae6daa5-430f-4f14-a40c-c29f4757fb06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:55:51.381936  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [895b0cdf-86c9-4b14-a633-4b172471cd2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:55:51.381947  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [ccc042d4-cd4c-4769-adc6-99d792146d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:55:51.381963  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [b3fbba6f-fa71-489e-81b0-0196bb019273] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:55:51.381972  115497 system_pods.go:61] "kube-proxy-9ftnp" [4389fff8-1b22-47a5-af97-35a4e5b6c2b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:55:51.381981  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [b53c666c-cc84-4dd3-b208-35d04113381c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:55:51.381997  115497 system_pods.go:61] "metrics-server-57f55c9bc5-7bblg" [3a6477d9-cb91-48cb-ba03-8b669db53841] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:55:51.382006  115497 system_pods.go:61] "storage-provisioner" [b8f06027-e37c-4c09-b361-4d70af65c991] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:55:51.382020  115497 system_pods.go:74] duration metric: took 16.393796ms to wait for pod list to return data ...
	I1206 19:55:51.382041  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:51.389181  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:51.389242  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:51.389256  115497 node_conditions.go:105] duration metric: took 7.208817ms to run NodePressure ...
	I1206 19:55:51.389285  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:50.466490  115217 retry.go:31] will retry after 11.434043258s: kubelet not initialised
	I1206 19:55:49.900059  115591 crio.go:444] Took 1.838540 seconds to copy over tarball
	I1206 19:55:49.900171  115591 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:53.471724  115591 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.571512743s)
	I1206 19:55:53.471757  115591 crio.go:451] Took 3.571659 seconds to extract the tarball
	I1206 19:55:53.471770  115591 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:53.522151  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:53.578068  115591 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:53.578167  115591 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:53.578285  115591 ssh_runner.go:195] Run: crio config
	I1206 19:55:53.650688  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:55:53.650715  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:53.650736  115591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:53.650762  115591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-209025 NodeName:embed-certs-209025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:53.650938  115591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-209025"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:53.651025  115591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-209025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:53.651093  115591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:53.663792  115591 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:53.663869  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:53.674126  115591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 19:55:53.692175  115591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:53.708842  115591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1206 19:55:53.726141  115591 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:53.730310  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:53.742456  115591 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025 for IP: 192.168.50.164
	I1206 19:55:53.742489  115591 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:53.742712  115591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:53.742765  115591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:53.742841  115591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/client.key
	I1206 19:55:53.742898  115591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key.d84b90a2
	I1206 19:55:53.742941  115591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key
	I1206 19:55:53.743053  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:53.743081  115591 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:53.743096  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:53.743135  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:53.743172  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:53.743205  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:53.743265  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:53.743932  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:53.770792  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:53.795080  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:53.820920  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 19:55:53.849068  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:53.875210  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:53.900201  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:53.927067  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:53.952810  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:53.979374  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:54.005013  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:54.028072  115591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:54.047087  115591 ssh_runner.go:195] Run: openssl version
	I1206 19:55:54.052949  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:54.064662  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069695  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069767  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.076520  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:54.088312  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:54.100303  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105718  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105787  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.111543  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:54.124094  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:54.137418  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142536  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142611  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.148497  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:54.160909  115591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:54.165739  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:54.171884  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:54.179765  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:54.187615  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:54.195156  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:54.203228  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
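The `openssl x509 -checkend 86400` probes above verify that each existing control-plane certificate stays valid for at least another 24 hours before the restart path reuses it. A minimal Go sketch of the same check with the standard crypto/x509 package; the path and the 24h window mirror the log, but the helper itself is illustrative, not minikube's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within the given window (the equivalent of `openssl x509 -checkend 86400`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// One of the certificates checked in the log above.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}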
	I1206 19:55:54.210119  115591 kubeadm.go:404] StartCluster: {Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:54.210251  115591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:54.210308  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:54.258252  115591 cri.go:89] found id: ""
	I1206 19:55:54.258347  115591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:54.270699  115591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:54.270724  115591 kubeadm.go:636] restartCluster start
	I1206 19:55:54.270785  115591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:54.281833  115591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.282964  115591 kubeconfig.go:92] found "embed-certs-209025" server: "https://192.168.50.164:8443"
	I1206 19:55:54.285394  115591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:54.296437  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.296545  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.309685  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.309707  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.309774  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.322265  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:51.655238  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:51.655732  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:51.655776  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:51.655642  117094 retry.go:31] will retry after 958.384892ms: waiting for machine to come up
	I1206 19:55:52.616005  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:52.616540  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:52.616583  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:52.616471  117094 retry.go:31] will retry after 1.537571193s: waiting for machine to come up
	I1206 19:55:54.155949  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:54.156397  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:54.156429  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:54.156344  117094 retry.go:31] will retry after 2.030397746s: waiting for machine to come up
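The retry.go lines above show the KVM driver repeatedly asking libvirt for the machine's DHCP lease and sleeping a little longer between attempts until the guest comes up. A rough sketch of that retry-with-growing-delay pattern; lookupIP is a hypothetical stand-in for the lease query, and the backoff schedule here is only an approximation of minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.5", nil
}

func main() {
	delay := time.Second
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
}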
	I1206 19:55:51.771991  115497 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:51.786960  115497 kubeadm.go:787] kubelet initialised
	I1206 19:55:51.787056  115497 kubeadm.go:788] duration metric: took 14.962005ms waiting for restarted kubelet to initialise ...
	I1206 19:55:51.787080  115497 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:55:51.799090  115497 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:53.845695  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:55.850483  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:54.823014  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.823105  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.835793  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.323393  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.323491  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.337041  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.823330  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.823437  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.839489  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.323250  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.323356  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.340029  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.822585  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.822700  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.835752  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.323326  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.323413  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.339916  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.823386  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.823478  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.840112  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.322441  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.322557  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.335485  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.822575  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.822695  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.839495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:59.323053  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.323129  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.336117  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.188549  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:56.189073  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:56.189105  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:56.189026  117094 retry.go:31] will retry after 2.455387871s: waiting for machine to come up
	I1206 19:55:58.646361  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:58.646772  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:58.646804  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:58.646710  117094 retry.go:31] will retry after 3.286246406s: waiting for machine to come up
	I1206 19:55:57.344443  115497 pod_ready.go:92] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"True"
	I1206 19:55:57.344478  115497 pod_ready.go:81] duration metric: took 5.545343389s waiting for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:57.344492  115497 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:59.363596  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.363703  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.907869  115217 retry.go:31] will retry after 21.572905296s: kubelet not initialised
	I1206 19:55:59.823000  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.823148  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.836153  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.322534  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.322617  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.340369  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.822851  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.822947  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.836512  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.323083  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.323161  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.337092  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.822623  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.822761  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.836428  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.323125  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.323213  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.336617  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.823198  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.823287  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.835923  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.322426  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.322527  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.336495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.822711  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.822803  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.836624  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:04.297216  115591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:04.297278  115591 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:04.297295  115591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:04.297393  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:04.343930  115591 cri.go:89] found id: ""
	I1206 19:56:04.344015  115591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:04.364785  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:04.376251  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:04.376320  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387749  115591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387779  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:04.511034  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:01.934204  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:01.934775  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:01.934798  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:01.934724  117094 retry.go:31] will retry after 2.967009815s: waiting for machine to come up
	I1206 19:56:04.903296  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:04.903725  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:04.903747  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:04.903692  117094 retry.go:31] will retry after 4.817836653s: waiting for machine to come up
	I1206 19:56:03.862804  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:04.373174  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.373209  115497 pod_ready.go:81] duration metric: took 7.028708302s waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.373222  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383300  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.383324  115497 pod_ready.go:81] duration metric: took 10.094356ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383333  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390225  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.390254  115497 pod_ready.go:81] duration metric: took 6.909695ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390267  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396713  115497 pod_ready.go:92] pod "kube-proxy-9ftnp" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.396753  115497 pod_ready.go:81] duration metric: took 6.477432ms waiting for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396766  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407015  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.407042  115497 pod_ready.go:81] duration metric: took 10.266604ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407056  115497 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
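pod_ready.go above walks each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, metrics-server) and waits for its Ready condition to turn True. A condensed client-go sketch of that condition check; the kubeconfig path is a hypothetical placeholder and this is not minikube's own helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := isPodReady(client, "kube-system", "kube-proxy-9ftnp")
	fmt.Println(ready, err)
}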
	I1206 19:56:05.819075  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.307992443s)
	I1206 19:56:05.819111  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.024824  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.120865  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
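Rather than running a full `kubeadm init`, the reconfigure path above drives the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml, with the pinned binaries directory prepended to PATH. A bare-bones sketch of sequencing those phases over exec; this is an illustration of the commands seen in the log, not minikube's bootstrapper code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}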
	I1206 19:56:06.207869  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:06.207959  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.221306  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.734164  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.234302  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.734130  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.233726  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.734073  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.762848  115591 api_server.go:72] duration metric: took 2.554978073s to wait for apiserver process to appear ...
	I1206 19:56:08.762881  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:08.762903  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
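Once an apiserver process finally answers the pgrep probe, the wait switches to polling the /healthz endpoint over HTTPS. A simplified probe loop for that step; skipping TLS verification is an assumption made only to keep the sketch short (the real check trusts the cluster's generated CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch only; the real check uses the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.50.164:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}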
	I1206 19:56:09.723600  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724078  115078 main.go:141] libmachine: (no-preload-989559) Found IP for machine: 192.168.39.5
	I1206 19:56:09.724107  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has current primary IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724114  115078 main.go:141] libmachine: (no-preload-989559) Reserving static IP address...
	I1206 19:56:09.724466  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.724509  115078 main.go:141] libmachine: (no-preload-989559) DBG | skip adding static IP to network mk-no-preload-989559 - found existing host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"}
	I1206 19:56:09.724526  115078 main.go:141] libmachine: (no-preload-989559) Reserved static IP address: 192.168.39.5
	I1206 19:56:09.724536  115078 main.go:141] libmachine: (no-preload-989559) Waiting for SSH to be available...
	I1206 19:56:09.724546  115078 main.go:141] libmachine: (no-preload-989559) DBG | Getting to WaitForSSH function...
	I1206 19:56:09.726831  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727117  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.727149  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727248  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH client type: external
	I1206 19:56:09.727277  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa (-rw-------)
	I1206 19:56:09.727306  115078 main.go:141] libmachine: (no-preload-989559) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:56:09.727317  115078 main.go:141] libmachine: (no-preload-989559) DBG | About to run SSH command:
	I1206 19:56:09.727361  115078 main.go:141] libmachine: (no-preload-989559) DBG | exit 0
	I1206 19:56:09.866040  115078 main.go:141] libmachine: (no-preload-989559) DBG | SSH cmd err, output: <nil>: 
	I1206 19:56:09.866443  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetConfigRaw
	I1206 19:56:09.867193  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:09.869892  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870335  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.870374  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870612  115078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/config.json ...
	I1206 19:56:09.870870  115078 machine.go:88] provisioning docker machine ...
	I1206 19:56:09.870895  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:09.871120  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871299  115078 buildroot.go:166] provisioning hostname "no-preload-989559"
	I1206 19:56:09.871320  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871471  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:09.874146  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874514  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.874554  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874741  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:09.874943  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875114  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875258  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:09.875412  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:09.875921  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:09.875942  115078 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-989559 && echo "no-preload-989559" | sudo tee /etc/hostname
	I1206 19:56:10.017205  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-989559
	
	I1206 19:56:10.017259  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.020397  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.020843  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.020873  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.021040  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.021287  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021450  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021578  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.021773  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.022227  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.022255  115078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-989559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-989559/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-989559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:56:10.160934  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:56:10.161020  115078 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:56:10.161056  115078 buildroot.go:174] setting up certificates
	I1206 19:56:10.161072  115078 provision.go:83] configureAuth start
	I1206 19:56:10.161086  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:10.161464  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:10.164558  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.164956  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.165007  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.165246  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.167911  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168352  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.168412  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168529  115078 provision.go:138] copyHostCerts
	I1206 19:56:10.168589  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:56:10.168612  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:56:10.168675  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:56:10.168796  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:56:10.168811  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:56:10.168844  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:56:10.168923  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:56:10.168962  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:56:10.168990  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:56:10.169062  115078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.no-preload-989559 san=[192.168.39.5 192.168.39.5 localhost 127.0.0.1 minikube no-preload-989559]
	I1206 19:56:10.266595  115078 provision.go:172] copyRemoteCerts
	I1206 19:56:10.266665  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:56:10.266693  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.269388  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269786  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.269813  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269987  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.270226  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.270390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.270536  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.362922  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:56:10.388165  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 19:56:10.412473  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:56:10.436804  115078 provision.go:86] duration metric: configureAuth took 275.714086ms
	I1206 19:56:10.436840  115078 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:56:10.437076  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 19:56:10.437156  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.439999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440419  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.440461  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440567  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.440813  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441006  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441213  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.441393  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.441827  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.441844  115078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:56:10.766695  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:56:10.766726  115078 machine.go:91] provisioned docker machine in 895.840237ms
	I1206 19:56:10.766739  115078 start.go:300] post-start starting for "no-preload-989559" (driver="kvm2")
	I1206 19:56:10.766759  115078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:56:10.766780  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:10.767134  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:56:10.767175  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.770309  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770704  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.770733  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770881  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.771110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.771247  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.771414  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.869486  115078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:56:10.874406  115078 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:56:10.874433  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:56:10.874502  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:56:10.874584  115078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:56:10.874684  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:56:10.885837  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:10.910379  115078 start.go:303] post-start completed in 143.622076ms
	I1206 19:56:10.910406  115078 fix.go:56] fixHost completed within 24.423837205s
	I1206 19:56:10.910430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.913414  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.913887  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.913924  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.914062  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.914276  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914575  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.914741  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.915078  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.915096  115078 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:56:06.672320  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:09.170277  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.173418  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.046393  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892571.030057611
	
	I1206 19:56:11.046418  115078 fix.go:206] guest clock: 1701892571.030057611
	I1206 19:56:11.046427  115078 fix.go:219] Guest: 2023-12-06 19:56:11.030057611 +0000 UTC Remote: 2023-12-06 19:56:10.910410702 +0000 UTC m=+364.968252500 (delta=119.646909ms)
	I1206 19:56:11.046452  115078 fix.go:190] guest clock delta is within tolerance: 119.646909ms
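fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to the host's view of the remote time, and only resyncs when the delta exceeds a tolerance. The comparison itself is just a duration check; a small sketch using the two timestamps from the log, with a hypothetical 1s tolerance:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the fix.go lines above.
	guest := time.Date(2023, 12, 6, 19, 56, 11, 30057611, time.UTC)
	remote := time.Date(2023, 12, 6, 19, 56, 10, 910410702, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // hypothetical threshold for the sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}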
	I1206 19:56:11.046460  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 24.559924375s
	I1206 19:56:11.046485  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.046751  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:11.049522  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.049918  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.049958  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.050160  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050715  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050932  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.051010  115078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:56:11.051063  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.051201  115078 ssh_runner.go:195] Run: cat /version.json
	I1206 19:56:11.051234  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.054142  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054342  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054556  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054587  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054711  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.054925  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054930  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.054950  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.055054  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.055147  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055316  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.055338  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.055483  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055605  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.180256  115078 ssh_runner.go:195] Run: systemctl --version
	I1206 19:56:11.186702  115078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:56:11.339386  115078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:56:11.346262  115078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:56:11.346364  115078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:56:11.362865  115078 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:56:11.362902  115078 start.go:475] detecting cgroup driver to use...
	I1206 19:56:11.362988  115078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:56:11.383636  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:56:11.397157  115078 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:56:11.397264  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:56:11.411338  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:56:11.425762  115078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:56:11.560730  115078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:56:11.708633  115078 docker.go:219] disabling docker service ...
	I1206 19:56:11.708713  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:56:11.723172  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:56:11.737032  115078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:56:11.851037  115078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:56:11.969321  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:56:11.982745  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:56:12.003130  115078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:56:12.003215  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.013345  115078 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:56:12.013428  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.023765  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.034114  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.044159  115078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:56:12.054135  115078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:56:12.062781  115078 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:56:12.062861  115078 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:56:12.076322  115078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:56:12.085924  115078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:56:12.216360  115078 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:56:12.409482  115078 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:56:12.409550  115078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:56:12.417063  115078 start.go:543] Will wait 60s for crictl version
	I1206 19:56:12.417135  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:12.422177  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:56:12.474340  115078 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:56:12.474449  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.538091  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.604444  115078 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 19:56:12.144887  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.144921  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.144936  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.179318  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.179366  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.679803  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.694412  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:12.694449  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.179503  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.193118  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:13.193161  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.679759  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.685603  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 19:56:13.694792  115591 api_server.go:141] control plane version: v1.28.4
	I1206 19:56:13.694831  115591 api_server.go:131] duration metric: took 4.931941572s to wait for apiserver health ...
	I1206 19:56:13.694843  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:56:13.694852  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:13.697042  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:13.698653  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:13.712991  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:13.734001  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:13.761962  115591 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:13.762001  115591 system_pods.go:61] "coredns-5dd5756b68-cpst4" [e7d8324e-8468-470c-b532-1f09ee805bab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:13.762022  115591 system_pods.go:61] "etcd-embed-certs-209025" [eeb81149-8e43-4efe-b977-e8f84c7a7c57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:13.762032  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b64e228d-4921-4e35-b80c-343f8519076e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:13.762041  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [2206d849-0724-42c9-b5c4-4d84c3cafce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:13.762053  115591 system_pods.go:61] "kube-proxy-pt8nj" [b7cffe6a-4401-40e0-8056-68452e15b57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:13.762068  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [88ae7a94-a1bc-463a-9253-5f308ec1755e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:13.762077  115591 system_pods.go:61] "metrics-server-57f55c9bc5-dr9k8" [0dbe18a4-d30d-4882-b188-b0d1f1b1d711] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:13.762092  115591 system_pods.go:61] "storage-provisioner" [afebf144-9062-4b43-a491-9eecd5ab6c10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:13.762109  115591 system_pods.go:74] duration metric: took 28.078588ms to wait for pod list to return data ...
	I1206 19:56:13.762120  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:13.773614  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:13.773646  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:13.773657  115591 node_conditions.go:105] duration metric: took 11.528993ms to run NodePressure ...
	I1206 19:56:13.773678  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:14.157761  115591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169588  115591 kubeadm.go:787] kubelet initialised
	I1206 19:56:14.169632  115591 kubeadm.go:788] duration metric: took 11.756226ms waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169644  115591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:14.186031  115591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.211717  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211747  115591 pod_ready.go:81] duration metric: took 25.681607ms waiting for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.211759  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211769  115591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.219369  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219396  115591 pod_ready.go:81] duration metric: took 7.594898ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.219408  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219425  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.233417  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233513  115591 pod_ready.go:81] duration metric: took 14.073312ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.233535  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233546  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.244480  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244516  115591 pod_ready.go:81] duration metric: took 10.958431ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.244530  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244537  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:12.606102  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:12.609040  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609395  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:12.609436  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609665  115078 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:56:12.615279  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:12.629571  115078 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 19:56:12.629641  115078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:56:12.674728  115078 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 19:56:12.674763  115078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:56:12.674870  115078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.674886  115078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.674910  115078 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.674923  115078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.674965  115078 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1206 19:56:12.674885  115078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.674998  115078 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.674889  115078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676510  115078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.676539  115078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676563  115078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.676576  115078 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1206 19:56:12.676511  115078 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.676599  115078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.676624  115078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.676642  115078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.862606  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1206 19:56:12.882993  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.884387  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.900149  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.909389  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.916391  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.924669  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.946885  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.028628  115078 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1206 19:56:13.028685  115078 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.028741  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.095076  115078 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1206 19:56:13.095139  115078 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.095289  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.136956  115078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1206 19:56:13.137003  115078 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 19:56:13.137074  115078 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.137130  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.137005  115078 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.137268  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.146913  115078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1206 19:56:13.146970  115078 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.147024  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.159866  115078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1206 19:56:13.159913  115078 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.159963  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162288  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.162330  115078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1206 19:56:13.162375  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.162378  115078 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.162399  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.162407  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.165637  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.319155  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1206 19:56:13.319253  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.319274  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.319300  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 19:56:13.319371  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319394  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:13.319405  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1206 19:56:13.319423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319472  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:13.319495  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.319545  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319621  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319546  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.376009  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376036  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376100  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376145  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1206 19:56:13.376179  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1206 19:56:13.376217  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1206 19:56:13.376273  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376302  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376329  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:13.376423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:15.530421  115078 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.153965348s)
	I1206 19:56:15.530466  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1206 19:56:15.530502  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.154372843s)
	I1206 19:56:15.530536  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1206 19:56:15.530571  115078 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:15.530630  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.177508  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:15.671903  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:14.963353  115591 pod_ready.go:92] pod "kube-proxy-pt8nj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:14.963382  115591 pod_ready.go:81] duration metric: took 718.835702ms waiting for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.963391  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:17.284373  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.354814  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.824152707s)
	I1206 19:56:19.354846  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1206 19:56:19.354874  115078 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:19.354924  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:20.402300  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.047341059s)
	I1206 19:56:20.402334  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 19:56:20.402378  115078 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:20.402442  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:17.672489  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:20.171526  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.771500  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:22.273627  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.269993  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.270019  115591 pod_ready.go:81] duration metric: took 8.306621129s waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.270029  115591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:22.575204  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.17273177s)
	I1206 19:56:22.575240  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1206 19:56:22.575270  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:22.575318  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:25.335616  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.760267154s)
	I1206 19:56:25.335646  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1206 19:56:25.335680  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:25.335760  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:22.175410  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:24.677136  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.486162  115217 kubeadm.go:787] kubelet initialised
	I1206 19:56:23.486192  115217 kubeadm.go:788] duration metric: took 47.560169603s waiting for restarted kubelet to initialise ...
	I1206 19:56:23.486203  115217 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:23.491797  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499126  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.499149  115217 pod_ready.go:81] duration metric: took 7.327003ms waiting for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499160  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.503979  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.504002  115217 pod_ready.go:81] duration metric: took 4.834039ms waiting for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.504014  115217 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509110  115217 pod_ready.go:92] pod "etcd-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.509132  115217 pod_ready.go:81] duration metric: took 5.109845ms waiting for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509153  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514641  115217 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.514665  115217 pod_ready.go:81] duration metric: took 5.502762ms waiting for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514677  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886694  115217 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.886726  115217 pod_ready.go:81] duration metric: took 372.040617ms waiting for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886741  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287638  115217 pod_ready.go:92] pod "kube-proxy-sw4qv" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.287662  115217 pod_ready.go:81] duration metric: took 400.914693ms waiting for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287673  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688298  115217 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.688328  115217 pod_ready.go:81] duration metric: took 400.645544ms waiting for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688343  115217 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:26.991669  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:25.288536  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.290135  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.610095  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.274298339s)
	I1206 19:56:27.610132  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1206 19:56:27.610163  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:27.610219  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:30.272712  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.662458967s)
	I1206 19:56:30.272746  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1206 19:56:30.272782  115078 cache_images.go:123] Successfully loaded all cached images
	I1206 19:56:30.272789  115078 cache_images.go:92] LoadImages completed in 17.598011028s
	I1206 19:56:30.272883  115078 ssh_runner.go:195] Run: crio config
	I1206 19:56:30.341321  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:30.341346  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:30.341368  115078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:56:30.341392  115078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-989559 NodeName:no-preload-989559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:56:30.341597  115078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-989559"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:56:30.341693  115078 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-989559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:56:30.341758  115078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 19:56:30.351650  115078 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:56:30.351729  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:56:30.360413  115078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1206 19:56:30.376399  115078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 19:56:30.392522  115078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1206 19:56:30.409313  115078 ssh_runner.go:195] Run: grep 192.168.39.5	control-plane.minikube.internal$ /etc/hosts
	I1206 19:56:30.413355  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:30.426797  115078 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559 for IP: 192.168.39.5
	I1206 19:56:30.426854  115078 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:30.427070  115078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:56:30.427134  115078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:56:30.427240  115078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.key
	I1206 19:56:30.427311  115078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key.c9b343a5
	I1206 19:56:30.427350  115078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key
	I1206 19:56:30.427454  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:56:30.427508  115078 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:56:30.427521  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:56:30.427550  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:56:30.427571  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:56:30.427593  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:56:30.427634  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:30.428313  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:56:30.452268  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:56:30.476793  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:56:30.503751  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:56:30.530680  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:56:30.557770  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:56:30.582244  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:56:30.608096  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:56:30.634585  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:56:30.660669  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:56:30.686987  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:56:30.711098  115078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:56:30.727576  115078 ssh_runner.go:195] Run: openssl version
	I1206 19:56:30.733568  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:56:30.743777  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.748976  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.749033  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.755465  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:56:30.766285  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:56:30.777164  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782160  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782228  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.789394  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:56:30.801293  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:56:30.812646  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818147  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818209  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.824161  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
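
(Note: the `ls -la`, `openssl x509 -hash -noout`, and `ln -fs` steps above are installing each CA into the system trust store: the certificate's OpenSSL subject hash becomes the name of a /etc/ssl/certs/<hash>.0 symlink. A minimal standalone sketch of that pattern, illustrative only and not minikube's actual code, might look like:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA mirrors the pattern in the log above: compute the OpenSSL
    // subject hash of a CA certificate and create the /etc/ssl/certs/<hash>.0
    // symlink that OpenSSL uses to look the CA up. Run as root.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Replace any stale link, as the "ln -fs" calls above do.
        os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
)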
	I1206 19:56:30.834389  115078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:56:30.839518  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:56:30.845997  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:56:30.852229  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:56:30.858622  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:56:30.864675  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:56:30.870945  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
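
(Note: the repeated `-checkend 86400` calls above ask openssl whether each control-plane certificate will still be valid 24 hours from now; exit status 0 means it will not expire in that window. A minimal sketch of that check, illustrative only, with the same cert paths as in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // checkCertValid mirrors the "-checkend 86400" calls above: openssl exits 0
    // if the certificate is still valid 24h from now, non-zero otherwise.
    func checkCertValid(path string) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
        return cmd.Run() == nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            fmt.Printf("%s valid for 24h: %v\n", p, checkCertValid(p))
        }
    }
)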
	I1206 19:56:30.878301  115078 kubeadm.go:404] StartCluster: {Name:no-preload-989559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:56:30.878438  115078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:56:30.878513  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:30.921588  115078 cri.go:89] found id: ""
	I1206 19:56:30.921692  115078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:56:30.932160  115078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:56:30.932190  115078 kubeadm.go:636] restartCluster start
	I1206 19:56:30.932264  115078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:56:30.942019  115078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.943237  115078 kubeconfig.go:92] found "no-preload-989559" server: "https://192.168.39.5:8443"
	I1206 19:56:30.945618  115078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:56:30.954582  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.954655  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.966532  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.966555  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.966602  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.979930  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:27.172625  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.671318  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:28.992218  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:30.994420  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.786922  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.787251  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.480021  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.480135  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.493287  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:31.980317  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.980409  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.994348  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.480929  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.481020  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.494940  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.980449  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.980559  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.993316  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.481040  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.481150  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.494210  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.980837  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.980936  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.994280  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.480389  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.480492  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.493915  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.980458  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.980569  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.994306  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.480788  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.480897  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:35.495397  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.980815  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.980919  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:32.171889  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:34.669989  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.491932  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.492626  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:37.991389  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.787950  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:38.288581  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	W1206 19:56:35.994848  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.480833  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.480959  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.496053  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.980074  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.980197  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.994615  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.480110  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.480203  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.494380  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.980922  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.981009  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.994865  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.480432  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.480536  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.494938  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.980148  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.980250  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.995427  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.481067  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.481153  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.494631  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.980142  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.980255  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.991638  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.480132  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:40.480205  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:40.492507  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.955413  115078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:40.955478  115078 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:40.955492  115078 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:40.955574  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:36.673986  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:39.172561  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:41.177049  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.490976  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.492210  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.293997  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.789693  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.997724  115078 cri.go:89] found id: ""
	I1206 19:56:40.997783  115078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:41.013137  115078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:41.021612  115078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:41.021667  115078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030846  115078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030878  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:41.160850  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.395616  115078 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234715721s)
	I1206 19:56:42.395651  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.595187  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.688245  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.769464  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:42.769566  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:42.783010  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.303551  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.803070  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.303922  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.803326  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.302954  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.323804  115078 api_server.go:72] duration metric: took 2.55435107s to wait for apiserver process to appear ...
	I1206 19:56:45.323839  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:45.323865  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.324588  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.324632  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.325115  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.825883  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:43.670089  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.670833  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:44.994670  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.492548  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.288109  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.788636  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.759033  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.759089  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.759117  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.778467  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.778502  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.825793  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.888751  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:49.888801  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.325211  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.330395  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.330438  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.826038  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.830801  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.830836  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:51.325298  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:51.331295  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 19:56:51.340412  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 19:56:51.340445  115078 api_server.go:131] duration metric: took 6.016598018s to wait for apiserver health ...
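
(Note: the healthz wait above polls https://<node-ip>:8443/healthz until it returns 200, treating 403 from the anonymous probe and 500 while poststarthooks finish as "not ready yet". A minimal sketch of such a probe loop, illustrative only and not minikube's actual api_server.go code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout expires, printing intermediate non-200 responses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver serving cert is not trusted by this host, so the
            // probe (and only the probe) skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.39.5:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
)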
	I1206 19:56:51.340457  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:51.340465  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:51.383227  115078 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:47.671090  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:50.173835  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.494360  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.991886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.385027  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:51.399942  115078 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
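
(Note: the step above writes a bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact 457-byte file is not shown in the log; the sketch below writes an illustrative bridge conflist whose bridge name and subnet are assumptions, not minikube's actual values:

    package main

    import "os"

    // A minimal bridge CNI config in the spirit of the 1-k8s.conflist above.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        // Requires root on the node, as in the "sudo mkdir -p /etc/cni/net.d" step above.
        os.MkdirAll("/etc/cni/net.d", 0o755)
        os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }
)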
	I1206 19:56:51.422533  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:51.446615  115078 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:51.446661  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:51.446671  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:51.446684  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:51.446698  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:51.446707  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:51.446716  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:51.446731  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:51.446739  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:51.446749  115078 system_pods.go:74] duration metric: took 24.188803ms to wait for pod list to return data ...
	I1206 19:56:51.446758  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:51.452770  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:51.452803  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:51.452817  115078 node_conditions.go:105] duration metric: took 6.05327ms to run NodePressure ...
	I1206 19:56:51.452840  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:51.740786  115078 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746512  115078 kubeadm.go:787] kubelet initialised
	I1206 19:56:51.746541  115078 kubeadm.go:788] duration metric: took 5.720787ms waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746550  115078 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:51.752751  115078 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.761003  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761032  115078 pod_ready.go:81] duration metric: took 8.254381ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.761043  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761052  115078 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.766223  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766248  115078 pod_ready.go:81] duration metric: took 5.184525ms waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.766259  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766271  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.771516  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771541  115078 pod_ready.go:81] duration metric: took 5.262069ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.771552  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771561  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.827774  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827804  115078 pod_ready.go:81] duration metric: took 56.232455ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.827818  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827826  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.231699  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231761  115078 pod_ready.go:81] duration metric: took 403.922333ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.231774  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231790  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.626827  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626863  115078 pod_ready.go:81] duration metric: took 395.06457ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.626877  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626889  115078 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:53.028166  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028201  115078 pod_ready.go:81] duration metric: took 401.294916ms waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:53.028214  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028226  115078 pod_ready.go:38] duration metric: took 1.281664253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
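
(Note: the pod_ready waits above repeatedly fetch each system pod and check whether its PodReady condition is True, skipping pods on a node that is not yet Ready. As an illustration only, a minimal client-go sketch of that check, with a hypothetical kubeconfig path and pod name taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True, which is the
    // condition the pod_ready waits above are looking for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-h9pkz", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
)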
	I1206 19:56:53.028249  115078 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:56:53.057673  115078 ops.go:34] apiserver oom_adj: -16
	I1206 19:56:53.057706  115078 kubeadm.go:640] restartCluster took 22.12550727s
	I1206 19:56:53.057718  115078 kubeadm.go:406] StartCluster complete in 22.179430573s
	I1206 19:56:53.057756  115078 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.057857  115078 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:56:53.059885  115078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.060125  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:56:53.060244  115078 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:56:53.060337  115078 addons.go:69] Setting storage-provisioner=true in profile "no-preload-989559"
	I1206 19:56:53.060364  115078 addons.go:231] Setting addon storage-provisioner=true in "no-preload-989559"
	I1206 19:56:53.060367  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	W1206 19:56:53.060375  115078 addons.go:240] addon storage-provisioner should already be in state true
	I1206 19:56:53.060404  115078 addons.go:69] Setting default-storageclass=true in profile "no-preload-989559"
	I1206 19:56:53.060415  115078 addons.go:69] Setting metrics-server=true in profile "no-preload-989559"
	I1206 19:56:53.060430  115078 addons.go:231] Setting addon metrics-server=true in "no-preload-989559"
	I1206 19:56:53.060433  115078 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-989559"
	W1206 19:56:53.060440  115078 addons.go:240] addon metrics-server should already be in state true
	I1206 19:56:53.060452  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060472  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060856  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060889  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060917  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.065950  115078 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-989559" context rescaled to 1 replicas
	I1206 19:56:53.065992  115078 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:56:53.068038  115078 out.go:177] * Verifying Kubernetes components...
	I1206 19:56:53.069775  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:56:53.077795  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I1206 19:56:53.078120  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
	I1206 19:56:53.078274  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078714  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078902  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.078928  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079207  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.079226  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079272  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079514  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079727  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.079865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.079899  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.083670  115078 addons.go:231] Setting addon default-storageclass=true in "no-preload-989559"
	W1206 19:56:53.083695  115078 addons.go:240] addon default-storageclass should already be in state true
	I1206 19:56:53.083724  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.084178  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.084230  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.097845  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1206 19:56:53.098357  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.099058  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.099080  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.099409  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.099633  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.101625  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.103641  115078 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 19:56:53.105081  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I1206 19:56:53.105105  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 19:56:53.105123  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 19:56:53.105150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.104327  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1206 19:56:53.105556  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105777  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105983  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.105998  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106312  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.106328  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106619  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.106910  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.107192  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107229  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.107338  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107398  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.108297  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.108969  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.108999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.109150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.109436  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.109586  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.109725  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.123985  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I1206 19:56:53.124496  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125052  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.125078  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.125325  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1206 19:56:53.125570  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.125785  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125826  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.126385  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.126413  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.126875  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.127050  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.127923  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.128212  115078 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.128226  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:56:53.128242  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.128747  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.131043  115078 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:53.131487  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132638  115078 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.132645  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.132651  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:56:53.132667  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.132682  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132132  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.133425  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.133636  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.133870  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.136039  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136583  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.136612  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136850  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.137087  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.137390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.137583  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.247726  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 19:56:53.247751  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 19:56:53.271421  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.296149  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 19:56:53.296181  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 19:56:53.303580  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.350607  115078 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1206 19:56:53.350607  115078 node_ready.go:35] waiting up to 6m0s for node "no-preload-989559" to be "Ready" ...
	I1206 19:56:53.355315  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.355336  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 19:56:53.392730  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.624768  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.624798  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625224  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625330  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.625353  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.625393  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625227  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:53.625849  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625874  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.632671  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.632691  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.632983  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.633005  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433395  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12977215s)
	I1206 19:56:54.433462  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433491  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433360  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040565961s)
	I1206 19:56:54.433546  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433567  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433833  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433854  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433863  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433867  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433871  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433842  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433908  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433926  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433939  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433951  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.434124  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434148  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434153  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434199  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434212  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434224  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434240  115078 addons.go:467] Verifying addon metrics-server=true in "no-preload-989559"
	I1206 19:56:54.437357  115078 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 19:56:50.289141  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:52.289568  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.438928  115078 addons.go:502] enable addons completed in 1.378684523s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 19:56:55.439812  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.174520  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.175288  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.492713  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:56.493106  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.789039  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.288485  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.289450  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.931320  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:57:00.430485  115078 node_ready.go:49] node "no-preload-989559" has status "Ready":"True"
	I1206 19:57:00.430517  115078 node_ready.go:38] duration metric: took 7.079875254s waiting for node "no-preload-989559" to be "Ready" ...
	I1206 19:57:00.430530  115078 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:57:00.436772  115078 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442667  115078 pod_ready.go:92] pod "coredns-76f75df574-h9pkz" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:00.442688  115078 pod_ready.go:81] duration metric: took 5.884841ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442701  115078 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:56.671845  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.172634  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.175416  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:58.991760  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:00.992295  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.787443  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.787988  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:02.468096  115078 pod_ready.go:102] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:04.965881  115078 pod_ready.go:92] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.965905  115078 pod_ready.go:81] duration metric: took 4.523195911s waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.965916  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971414  115078 pod_ready.go:92] pod "kube-apiserver-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.971433  115078 pod_ready.go:81] duration metric: took 5.510214ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971441  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977851  115078 pod_ready.go:92] pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.977870  115078 pod_ready.go:81] duration metric: took 6.422623ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977878  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985189  115078 pod_ready.go:92] pod "kube-proxy-zgqvt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.985215  115078 pod_ready.go:81] duration metric: took 7.330713ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985224  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230810  115078 pod_ready.go:92] pod "kube-scheduler-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:05.230835  115078 pod_ready.go:81] duration metric: took 245.59313ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230845  115078 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:03.189551  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.673064  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.491815  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.991689  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.992156  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.789026  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.789964  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.538620  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.040533  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:08.171042  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.671754  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.491886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.287716  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.788212  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.538291  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.541614  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.672138  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:15.171421  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.992060  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.502730  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.788301  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.287038  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.288646  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.038893  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.543137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.671258  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:20.170885  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.991949  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.491591  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:21.787339  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:23.788729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.041590  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.540137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.171069  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.670919  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.992198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.492171  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:26.290524  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:28.787761  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.039132  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.542736  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.170762  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.171345  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.992006  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.288189  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:33.787785  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.039418  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.039727  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.670563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.170705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.171236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.492161  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.492522  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:35.788140  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:37.788283  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.540765  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:39.038645  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.171622  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.670580  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.990433  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.990810  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.992228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.287403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.287578  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.287701  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:41.039767  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.539800  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.543374  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.173769  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.670574  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.995625  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:47.492316  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:46.289397  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.787659  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.038286  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.039013  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.176705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.670177  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:49.991919  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.491478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.788175  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.288824  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.040785  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.538521  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.173256  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.670940  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.492526  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.493207  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.787745  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:57.788237  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.539097  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.039024  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.174463  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.674095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.990652  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.993255  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.788454  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:02.287774  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:04.288180  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:01.042813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.541670  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.171100  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.673480  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.492375  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.991094  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:07.992159  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.288916  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.289817  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.038556  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.038962  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.539560  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.171785  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.671152  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:09.993042  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.491776  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.790823  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.791724  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.540234  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.542433  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.672062  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.170654  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.993921  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.492163  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.289223  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.787808  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.038754  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.039749  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.171210  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.670633  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.991157  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.991531  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.788614  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:22.288567  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.040007  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.047504  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:25.539859  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.671920  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.173543  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.993354  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.491975  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.789151  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.789703  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.287981  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.038595  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.039044  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.670809  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.171281  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.492552  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.990797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.991467  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.289190  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.788860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.046392  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.538829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.671784  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.672095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:36.171077  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.992478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.492021  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:35.789666  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.287860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.038795  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.537643  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.670088  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.171066  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.991754  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.994379  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:40.288183  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:42.788826  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.539212  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.543524  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.674139  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.170213  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:44.491092  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.491632  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:45.287473  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:47.288157  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:49.289525  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.038254  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.039117  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.039290  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.170319  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.671091  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.492359  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.992132  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:51.787368  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.788448  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:52.039474  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:54.540427  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.169921  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.171727  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.492764  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.993038  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:56.287644  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.288171  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.038915  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.039626  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.671011  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.671928  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.491565  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.492398  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.994198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.788591  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.789729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:01.540414  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:03.547448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.172546  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:04.670363  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.492399  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.991600  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.287805  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.289128  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.039393  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:08.040259  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.541882  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.670653  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.172460  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.491981  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.991797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.788064  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.544283  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:15.040829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:11.673737  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.172972  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.992556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.492610  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.788287  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.789265  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.287925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.542363  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:20.039068  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.674724  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:18.675236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.170028  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.493199  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.992164  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.288023  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.289315  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:22.539662  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.038813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.170153  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.172299  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:24.491811  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:26.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.788309  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.791911  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.539832  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:29.540277  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.671148  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.171591  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:28.990920  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.992085  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.992394  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.288522  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.288574  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:31.542448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.039116  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.671751  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.169968  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.492708  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.992344  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.787925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.788270  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:38.788369  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.539113  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.040215  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.171340  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.171482  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.491091  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:42.491915  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.789138  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.287352  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.538818  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.539787  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.670936  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.671019  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.671158  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:44.992666  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.491581  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.287493  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.787403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:46.039500  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.538497  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.539750  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.171563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.673901  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.991083  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.991943  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.788072  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.788139  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.788885  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.039532  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.539183  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.177102  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.670778  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.992408  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.492592  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.288587  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.288722  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:57.539766  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.038890  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.171948  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.173211  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.492926  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.992517  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.992971  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.291465  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.292084  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.039986  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.541022  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.674513  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.407290  115497 pod_ready.go:81] duration metric: took 4m0.000215571s waiting for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:04.407325  115497 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:04.407343  115497 pod_ready.go:38] duration metric: took 4m12.62023597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:04.407376  115497 kubeadm.go:640] restartCluster took 4m33.115368763s
	W1206 20:00:04.407460  115497 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:04.407558  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
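The pod_ready lines that dominate this stretch of the log all follow the same shape: a condition is re-checked every couple of seconds, each miss is logged as "Ready":"False", and the wait is abandoned with "context deadline exceeded" once the 4m0s budget runs out (pod_ready.go:81/:66 above). A minimal, hypothetical Go sketch of that pattern is shown below; checkReady is a stand-in for the real apiserver query minikube performs, not minikube's actual code.

    package main

    import (
        "context"
        "errors"
        "log"
        "time"
    )

    // waitPodReady re-evaluates checkReady every interval until it reports true
    // or the timeout elapses, mirroring the "waiting for pod ... to be Ready"
    // behaviour recorded in the log.
    func waitPodReady(ctx context.Context, interval, timeout time.Duration,
        checkReady func(context.Context) (bool, error)) error {

        ctx, cancel := context.WithTimeout(ctx, timeout)
        defer cancel()

        ticker := time.NewTicker(interval)
        defer ticker.Stop()

        for {
            ready, err := checkReady(ctx)
            if err != nil {
                return err
            }
            if ready {
                return nil
            }
            log.Println(`pod has status "Ready":"False"`)

            select {
            case <-ctx.Done():
                // Corresponds to the "context deadline exceeded" entries after 4m0s.
                return errors.New("timed out waiting for pod to be Ready")
            case <-ticker.C:
            }
        }
    }

    func main() {
        // Dummy condition that never becomes ready, so the call times out,
        // just as the metrics-server waits above do.
        neverReady := func(context.Context) (bool, error) { return false, nil }
        err := waitPodReady(context.Background(), 2*time.Second, 4*time.Minute, neverReady)
        log.Println(err)
    }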
	I1206 20:00:05.492129  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:07.493228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.788290  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.789396  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:08.789507  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.541064  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.040499  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.992817  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:12.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.288813  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.788228  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.540420  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.540837  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:14.492803  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:16.991852  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.762771  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.35517444s)
	I1206 20:00:18.762878  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:18.777691  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:18.788508  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:18.798417  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:18.798483  115497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:18.858377  115497 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:18.858486  115497 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:19.020664  115497 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:19.020845  115497 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:19.020979  115497 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:19.294254  115497 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:15.788560  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.288173  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:19.296186  115497 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:19.296294  115497 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:19.296394  115497 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:19.296512  115497 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:19.296601  115497 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:19.296712  115497 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:19.296779  115497 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:19.296938  115497 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:19.297044  115497 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:19.297141  115497 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:19.297228  115497 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:19.297296  115497 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:19.297374  115497 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:19.401712  115497 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:19.667664  115497 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:19.977926  115497 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:20.161984  115497 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:20.162704  115497 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:20.165273  115497 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:16.040687  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.540495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.167168  115497 out.go:204]   - Booting up control plane ...
	I1206 20:00:20.167327  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:20.167488  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:20.167596  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:20.186839  115497 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:20.187950  115497 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:20.188122  115497 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:20.329099  115497 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:18.991946  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:21.490687  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.290780  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:22.293161  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.270450  115591 pod_ready.go:81] duration metric: took 4m0.000401122s waiting for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:23.270504  115591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:23.270527  115591 pod_ready.go:38] duration metric: took 4m9.100871827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:23.270576  115591 kubeadm.go:640] restartCluster took 4m28.999844958s
	W1206 20:00:23.270666  115591 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:23.270705  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:21.040410  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.041625  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:25.044168  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.492875  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:24.689131  115217 pod_ready.go:81] duration metric: took 4m0.000750192s waiting for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:24.689173  115217 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:24.689203  115217 pod_ready.go:38] duration metric: took 4m1.202987977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:24.689247  115217 kubeadm.go:640] restartCluster took 5m10.459408033s
	W1206 20:00:24.689356  115217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:24.689392  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:29.334312  115497 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004152 seconds
	I1206 20:00:29.334473  115497 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:29.360390  115497 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:29.898911  115497 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:29.899167  115497 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-380424 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:30.416589  115497 kubeadm.go:322] [bootstrap-token] Using token: gsw79m.btql0t11yc11efah
	I1206 20:00:30.418388  115497 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:30.418538  115497 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:30.424651  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:30.439637  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:30.443854  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:30.448439  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:30.454084  115497 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:30.473340  115497 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:30.748803  115497 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:30.835721  115497 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:30.837289  115497 kubeadm.go:322] 
	I1206 20:00:30.837362  115497 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:30.837381  115497 kubeadm.go:322] 
	I1206 20:00:30.837449  115497 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:30.837457  115497 kubeadm.go:322] 
	I1206 20:00:30.837485  115497 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:30.837597  115497 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:30.837675  115497 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:30.837684  115497 kubeadm.go:322] 
	I1206 20:00:30.837760  115497 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:30.837770  115497 kubeadm.go:322] 
	I1206 20:00:30.837826  115497 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:30.837833  115497 kubeadm.go:322] 
	I1206 20:00:30.837899  115497 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:30.838016  115497 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:30.838114  115497 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:30.838124  115497 kubeadm.go:322] 
	I1206 20:00:30.838224  115497 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:30.838316  115497 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:30.838333  115497 kubeadm.go:322] 
	I1206 20:00:30.838409  115497 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838522  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:30.838559  115497 kubeadm.go:322] 	--control-plane 
	I1206 20:00:30.838568  115497 kubeadm.go:322] 
	I1206 20:00:30.838686  115497 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:30.838699  115497 kubeadm.go:322] 
	I1206 20:00:30.838805  115497 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838952  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:30.839686  115497 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:30.839714  115497 cni.go:84] Creating CNI manager for ""
	I1206 20:00:30.839727  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:30.841824  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:27.540848  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.038457  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.843246  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:30.916583  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
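The log records only that 457 bytes were copied to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown. As a rough illustration of what a bridge CNI configuration of that kind typically looks like (field names follow the CNI spec; the subnet and plugin list here are assumptions, not the actual minikube contents):

    package main

    import "os"

    // Illustrative bridge + host-local conflist; the real file written by
    // minikube over SSH may differ in every field.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // minikube transfers the file via its SSH runner; writing it locally is
        // shown here only to illustrate the destination path and payload shape.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }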
	I1206 20:00:30.974088  115497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=default-k8s-diff-port-380424 minikube.k8s.io/updated_at=2023_12_06T20_00_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.400910  115497 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:31.401056  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.320362  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.630947418s)
	I1206 20:00:31.320445  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:31.349765  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:31.369412  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:31.381350  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:31.381410  115217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1206 20:00:31.626397  115217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:32.039425  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:34.041934  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:31.516285  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.139221  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.639059  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.139995  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.639038  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.139842  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.640037  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.139893  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.639961  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:36.139749  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.383787  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.113041618s)
	I1206 20:00:38.383859  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:38.397718  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:38.406748  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:38.415574  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:38.415633  115591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:38.485595  115591 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:38.485781  115591 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:38.659892  115591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:38.660073  115591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:38.660209  115591 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:38.939756  115591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:38.941971  115591 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:38.942103  115591 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:38.942200  115591 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:38.942296  115591 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:38.942708  115591 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:38.943817  115591 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:38.944130  115591 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:38.944894  115591 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:38.945607  115591 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:38.946355  115591 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:38.947015  115591 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:38.947720  115591 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:38.947795  115591 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:39.140045  115591 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:39.300047  115591 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:39.418439  115591 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:40.060938  115591 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:40.061616  115591 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:40.064208  115591 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:36.042049  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:38.540429  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:36.639372  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.139213  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.639506  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.139159  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.639007  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.139972  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.639969  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.139910  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.639836  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.139009  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.639153  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.139055  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.639853  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.139934  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.639741  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.139776  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.279581  115497 kubeadm.go:1088] duration metric: took 13.305461955s to wait for elevateKubeSystemPrivileges.
	I1206 20:00:44.279625  115497 kubeadm.go:406] StartCluster complete in 5m13.04588426s
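The run of identical "kubectl get sa default" commands above (one roughly every 500ms, 13.3s in total per the elevateKubeSystemPrivileges metric) is a readiness gate: the command is retried until it succeeds, i.e. until the "default" ServiceAccount has been created by the new control plane. A hypothetical sketch of that retry shape, using os/exec locally in place of minikube's SSH runner:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // waitForDefaultServiceAccount retries `kubectl get sa default` until it
    // succeeds or the deadline passes; each failed attempt waits half a second
    // and tries again, which produces the cadence seen in the log.
    func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            } else if time.Now().After(deadline) {
                return err
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        // Kubeconfig path taken from the log; adjust for a local experiment.
        if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 5*time.Minute); err != nil {
            log.Fatal(err)
        }
        log.Println("default service account is present")
    }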
	I1206 20:00:44.279660  115497 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.279765  115497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:00:44.282748  115497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.285263  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:00:44.285351  115497 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:00:44.285434  115497 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285459  115497 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285471  115497 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:00:44.285478  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:00:44.285531  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285542  115497 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285561  115497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-380424"
	I1206 20:00:44.285719  115497 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285738  115497 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285747  115497 addons.go:240] addon metrics-server should already be in state true
	I1206 20:00:44.285797  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286023  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286026  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286167  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286190  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.306223  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1206 20:00:44.306441  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39661
	I1206 20:00:44.307505  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.307637  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.308463  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I1206 20:00:44.308651  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.308672  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309154  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.309173  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309295  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.309539  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.310150  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.310183  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.310431  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.312432  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.313004  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.313020  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.315047  115497 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.315065  115497 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:00:44.315094  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.315499  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.315523  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.316248  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.316893  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.316920  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.335555  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I1206 20:00:44.335908  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1206 20:00:44.336636  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.336749  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.337379  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337404  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337791  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337818  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337895  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.338474  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.338502  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.338944  115497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-380424" context rescaled to 1 replicas
	I1206 20:00:44.338979  115497 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:00:44.340731  115497 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:44.339696  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.342367  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:44.342537  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.348774  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I1206 20:00:44.348808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.350935  115497 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:00:44.349433  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.353022  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:00:44.353036  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:00:44.353060  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.353493  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.353512  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.354850  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.355732  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.356894  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.359438  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1206 20:00:44.360009  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.360502  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.360525  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.360899  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.361092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.362575  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.362605  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.362663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.363067  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.363259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.363310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.363544  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.363628  115497 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.363643  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:00:44.363663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.365352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.367261  115497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:00:40.066048  115591 out.go:204]   - Booting up control plane ...
	I1206 20:00:40.066207  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:40.066320  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:40.069077  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:40.086558  115591 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:40.087856  115591 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:40.087969  115591 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:40.224157  115591 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.313051  115217 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1206 20:00:45.313125  115217 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:45.313226  115217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:45.313355  115217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:45.313466  115217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:45.313591  115217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:45.313697  115217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:45.313767  115217 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1206 20:00:45.313844  115217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:45.315759  115217 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:45.315876  115217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:45.315980  115217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:45.316085  115217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:45.316158  115217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:45.316252  115217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:45.316320  115217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:45.316420  115217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:45.316505  115217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:45.316608  115217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:45.316707  115217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:45.316761  115217 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:45.316838  115217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:45.316909  115217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:45.316982  115217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:45.317068  115217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:45.317136  115217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:45.317221  115217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:45.318915  115217 out.go:204]   - Booting up control plane ...
	I1206 20:00:45.319042  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:45.319145  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:45.319253  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:45.319367  115217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:45.319568  115217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.319690  115217 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504419 seconds
	I1206 20:00:45.319828  115217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:45.319978  115217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:45.320042  115217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:45.320189  115217 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-448851 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 20:00:45.320255  115217 kubeadm.go:322] [bootstrap-token] Using token: ms33mw.f0m2wm1rokle0nnu
	I1206 20:00:45.321976  115217 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:45.322105  115217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:45.322229  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:45.322373  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:45.322532  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:45.322673  115217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:45.322759  115217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:45.322845  115217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:45.322857  115217 kubeadm.go:322] 
	I1206 20:00:45.322936  115217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:45.322945  115217 kubeadm.go:322] 
	I1206 20:00:45.323055  115217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:45.323071  115217 kubeadm.go:322] 
	I1206 20:00:45.323105  115217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:45.323196  115217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:45.323270  115217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:45.323282  115217 kubeadm.go:322] 
	I1206 20:00:45.323373  115217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:45.323477  115217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:45.323575  115217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:45.323590  115217 kubeadm.go:322] 
	I1206 20:00:45.323736  115217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1206 20:00:45.323840  115217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:45.323855  115217 kubeadm.go:322] 
	I1206 20:00:45.323984  115217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324187  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:45.324248  115217 kubeadm.go:322]     --control-plane 	  
	I1206 20:00:45.324266  115217 kubeadm.go:322] 
	I1206 20:00:45.324386  115217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:45.324397  115217 kubeadm.go:322] 
	I1206 20:00:45.324501  115217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324651  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:45.324664  115217 cni.go:84] Creating CNI manager for ""
	I1206 20:00:45.324675  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:45.327284  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:41.039495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:43.041892  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:45.042744  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:44.369437  115497 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.369449  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.369458  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:00:44.369482  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.373360  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373394  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373415  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.373538  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373769  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.373830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.374020  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.374077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.374221  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.374800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.375017  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.528574  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.553349  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:00:44.553382  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:00:44.604100  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.605360  115497 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.605799  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:00:44.610007  115497 node_ready.go:49] node "default-k8s-diff-port-380424" has status "Ready":"True"
	I1206 20:00:44.610039  115497 node_ready.go:38] duration metric: took 4.647914ms waiting for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.610052  115497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:44.622684  115497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:44.639914  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:00:44.640005  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:00:44.710284  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:44.710318  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:00:44.767014  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:46.656182  115497 pod_ready.go:102] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:46.941717  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.413097049s)
	I1206 20:00:46.941764  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.33594011s)
	I1206 20:00:46.941787  115497 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1206 20:00:46.941793  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941733  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337595925s)
	I1206 20:00:46.941808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.941841  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941863  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.942167  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.942187  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.942198  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.942207  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.943997  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944031  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944041  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944052  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944060  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944077  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.944088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.944363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944401  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944419  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.984172  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.984206  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.984675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.984714  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.984733  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.345448  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.5783821s)
	I1206 20:00:47.345552  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.345573  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.345987  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.346033  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346046  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346056  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.346088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.346359  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346380  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346392  115497 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-380424"
	I1206 20:00:47.346442  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.348281  115497 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1206 20:00:45.328763  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:45.342986  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:45.373351  115217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:45.373503  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.373559  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=old-k8s-version-448851 minikube.k8s.io/updated_at=2023_12_06T20_00_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.701779  115217 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:45.701907  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.815705  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.445065  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.945361  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.444737  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.945540  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.228883  115591 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004688 seconds
	I1206 20:00:49.229058  115591 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:49.258512  115591 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:49.793797  115591 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:49.794010  115591 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-209025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:50.315415  115591 kubeadm.go:322] [bootstrap-token] Using token: j4xv0f.htia0y0wrnbqnji6
	I1206 20:00:47.349693  115497 addons.go:502] enable addons completed in 3.064343142s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1206 20:00:48.648085  115497 pod_ready.go:92] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.648116  115497 pod_ready.go:81] duration metric: took 4.025396521s waiting for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.648132  115497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660202  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.660235  115497 pod_ready.go:81] duration metric: took 12.09317ms waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660248  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666568  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.666666  115497 pod_ready.go:81] duration metric: took 6.407781ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666694  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679566  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.679653  115497 pod_ready.go:81] duration metric: took 12.938485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679675  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554241  115497 pod_ready.go:92] pod "kube-proxy-khh5n" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.554266  115497 pod_ready.go:81] duration metric: took 874.584613ms waiting for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554275  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845110  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.845140  115497 pod_ready.go:81] duration metric: took 290.857787ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845152  115497 pod_ready.go:38] duration metric: took 5.235087469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
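	(Annotation: the pod_ready.go entries above report waiting for each system-critical pod to reach the "Ready" condition. A minimal client-go sketch of that kind of wait is shown below; it is an assumption about the general technique, not minikube's pod_ready.go, and the kubeconfig path and pod name are simply copied from the log.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the log mentions
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-x6p7t", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }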
	I1206 20:00:49.845172  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:00:49.845251  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:00:49.861908  115497 api_server.go:72] duration metric: took 5.522870891s to wait for apiserver process to appear ...
	I1206 20:00:49.861943  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:00:49.861965  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 20:00:49.868675  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 20:00:49.870214  115497 api_server.go:141] control plane version: v1.28.4
	I1206 20:00:49.870254  115497 api_server.go:131] duration metric: took 8.303187ms to wait for apiserver health ...
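	(Annotation: the api_server.go lines above poll https://192.168.72.22:8444/healthz until it returns 200 with body "ok". A rough Go equivalent of that probe is sketched below; it is illustrative only, not minikube's api_server.go, and it skips TLS verification because the sketch does not load the cluster CA.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Certificate verification is skipped here; minikube's real check trusts the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.72.22:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "ok"
                    return
                }
            }
            time.Sleep(time.Second)
        }
    }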
	I1206 20:00:49.870266  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:00:50.047974  115497 system_pods.go:59] 8 kube-system pods found
	I1206 20:00:50.048004  115497 system_pods.go:61] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.048011  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.048018  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.048025  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.048030  115497 system_pods.go:61] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.048036  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.048045  115497 system_pods.go:61] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.048052  115497 system_pods.go:61] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.048063  115497 system_pods.go:74] duration metric: took 177.789423ms to wait for pod list to return data ...
	I1206 20:00:50.048073  115497 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:00:50.246867  115497 default_sa.go:45] found service account: "default"
	I1206 20:00:50.246903  115497 default_sa.go:55] duration metric: took 198.823117ms for default service account to be created ...
	I1206 20:00:50.246914  115497 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:00:50.447688  115497 system_pods.go:86] 8 kube-system pods found
	I1206 20:00:50.447777  115497 system_pods.go:89] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.447798  115497 system_pods.go:89] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.447815  115497 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.447846  115497 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.447870  115497 system_pods.go:89] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.447886  115497 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.447904  115497 system_pods.go:89] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.447920  115497 system_pods.go:89] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.447953  115497 system_pods.go:126] duration metric: took 201.030369ms to wait for k8s-apps to be running ...
	I1206 20:00:50.447978  115497 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:00:50.448057  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:50.468801  115497 system_svc.go:56] duration metric: took 20.810606ms WaitForService to wait for kubelet.
	I1206 20:00:50.468837  115497 kubeadm.go:581] duration metric: took 6.129827661s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:00:50.468860  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:00:50.646083  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:00:50.646124  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 20:00:50.646138  115497 node_conditions.go:105] duration metric: took 177.272089ms to run NodePressure ...
	I1206 20:00:50.646153  115497 start.go:228] waiting for startup goroutines ...
	I1206 20:00:50.646164  115497 start.go:233] waiting for cluster config update ...
	I1206 20:00:50.646184  115497 start.go:242] writing updated cluster config ...
	I1206 20:00:50.646551  115497 ssh_runner.go:195] Run: rm -f paused
	I1206 20:00:50.711246  115497 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:00:50.713989  115497 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-380424" cluster and "default" namespace by default
	I1206 20:00:50.317018  115591 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:50.317155  115591 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:50.325410  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:50.335197  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:50.339351  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:50.343930  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:50.352323  115591 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:50.375514  115591 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:50.703397  115591 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:50.753323  115591 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:50.753351  115591 kubeadm.go:322] 
	I1206 20:00:50.753419  115591 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:50.753430  115591 kubeadm.go:322] 
	I1206 20:00:50.753522  115591 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:50.753539  115591 kubeadm.go:322] 
	I1206 20:00:50.753570  115591 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:50.753642  115591 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:50.753706  115591 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:50.753717  115591 kubeadm.go:322] 
	I1206 20:00:50.753780  115591 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:50.753790  115591 kubeadm.go:322] 
	I1206 20:00:50.753847  115591 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:50.753862  115591 kubeadm.go:322] 
	I1206 20:00:50.753928  115591 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:50.754020  115591 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:50.754109  115591 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:50.754120  115591 kubeadm.go:322] 
	I1206 20:00:50.754221  115591 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:50.754317  115591 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:50.754327  115591 kubeadm.go:322] 
	I1206 20:00:50.754426  115591 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754552  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:50.754583  115591 kubeadm.go:322] 	--control-plane 
	I1206 20:00:50.754593  115591 kubeadm.go:322] 
	I1206 20:00:50.754690  115591 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:50.754707  115591 kubeadm.go:322] 
	I1206 20:00:50.754802  115591 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754931  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:50.755776  115591 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:50.755809  115591 cni.go:84] Creating CNI manager for ""
	I1206 20:00:50.755820  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:50.759045  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:47.539932  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:50.039553  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:48.445172  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:48.944908  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.445418  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.944612  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.445278  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.944545  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.444775  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.945470  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.445365  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.944742  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.760722  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:50.792095  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:50.854264  115591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:50.854443  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.854549  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=embed-certs-209025 minikube.k8s.io/updated_at=2023_12_06T20_00_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.894717  115591 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:51.388829  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.515185  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.132878  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.633171  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.132766  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.632887  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.132824  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.044531  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:54.538924  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:53.444641  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.945468  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.444996  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.944687  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.444757  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.945342  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.445585  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.945489  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.445628  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.944895  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.632961  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.132361  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.632305  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.132439  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.632252  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.132956  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.633210  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.133090  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.632198  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.133167  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.445440  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.945554  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.445298  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.945574  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.179151  115217 kubeadm.go:1088] duration metric: took 14.805687634s to wait for elevateKubeSystemPrivileges.
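	(Annotation: the long run of repeated "kubectl get sa default" commands above is a polling loop that ends once the "default" ServiceAccount exists, which the summary line reports as the wait for elevateKubeSystemPrivileges. Below is a hedged client-go sketch of that polling pattern; the loop shape and the "default"/"default" namespace and name are assumptions drawn from the kubectl invocation, not minikube's code.)

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Keep asking until the ServiceAccount is created by the controller manager.
        for {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }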
	I1206 20:01:00.179185  115217 kubeadm.go:406] StartCluster complete in 5m46.007596294s
	I1206 20:01:00.179204  115217 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.179291  115217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:00.181490  115217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.181810  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:00.181933  115217 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:00.182031  115217 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182063  115217 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-448851"
	W1206 20:01:00.182071  115217 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:00.182126  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.182126  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 20:01:00.182180  115217 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182198  115217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448851"
	I1206 20:01:00.182554  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182572  115217 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182581  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182591  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182596  115217 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-448851"
	W1206 20:01:00.182606  115217 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:00.182613  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182735  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.183101  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.183146  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.201450  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I1206 20:01:00.203683  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I1206 20:01:00.203715  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.203800  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1206 20:01:00.204181  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204341  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204386  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204409  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204863  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204877  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204884  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204895  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204950  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205328  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205333  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205489  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.205520  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.205560  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.205992  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.206064  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.209487  115217 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-448851"
	W1206 20:01:00.209512  115217 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:00.209545  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.209987  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.210033  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.227092  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1206 20:01:00.227961  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.228610  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.228633  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.229107  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.229342  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.230638  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I1206 20:01:00.231552  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.231863  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.235076  115217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:00.232196  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.232926  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I1206 20:01:00.237258  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.237284  115217 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.237310  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:00.237333  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.237682  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.238034  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.238212  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.238240  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.238580  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.238612  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.238977  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.239198  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.240631  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.243107  115217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:00.241155  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.241833  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.245218  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:00.245244  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:00.245267  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.245315  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.245333  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.245505  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.245639  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.245737  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.248492  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249278  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.249313  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249597  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.249811  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.249971  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.250090  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.259179  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I1206 20:01:00.259617  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.260068  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.260090  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.260461  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.260685  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.262284  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.262586  115217 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.262604  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:00.262623  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.265183  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265643  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.265661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265890  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.266078  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.266240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.266941  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.271403  115217 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-448851" context rescaled to 1 replicas
	I1206 20:01:00.271435  115217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:00.273197  115217 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:57.039307  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:59.039639  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:00.274454  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:00.597204  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:00.597240  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:00.621632  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.623444  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.630185  115217 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.630280  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:00.633576  115217 node_ready.go:49] node "old-k8s-version-448851" has status "Ready":"True"
	I1206 20:01:00.633603  115217 node_ready.go:38] duration metric: took 3.385927ms waiting for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.633616  115217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:00.717216  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:00.717273  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:00.735998  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:00.866186  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:00.866218  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:01.066040  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:01.835164  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213479825s)
	I1206 20:01:01.835230  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835243  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835558  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835605  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835615  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.835648  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835939  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835974  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835983  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.872799  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.872835  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.873282  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.873317  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.873336  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.258697  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635202106s)
	I1206 20:01:02.258754  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.258769  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.258773  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.628450705s)
	I1206 20:01:02.258806  115217 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:02.259113  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.260973  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261002  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261014  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.261025  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.261416  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261440  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261424  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.375593  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.309500554s)
	I1206 20:01:02.375659  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.375680  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376064  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376155  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376168  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376185  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.376193  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376522  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376532  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376543  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376559  115217 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-448851"
	I1206 20:01:02.378457  115217 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:02.380099  115217 addons.go:502] enable addons completed in 2.198162438s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:00:59.632971  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.133124  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.633148  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.132260  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.632323  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.132575  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.632268  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.132789  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.633155  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.132754  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.321130  115591 kubeadm.go:1088] duration metric: took 13.466729355s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:04.321175  115591 kubeadm.go:406] StartCluster complete in 5m10.1110739s
	I1206 20:01:04.321200  115591 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.321311  115591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:04.324158  115591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.324502  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:04.324531  115591 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:04.324609  115591 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-209025"
	I1206 20:01:04.324633  115591 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-209025"
	W1206 20:01:04.324640  115591 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:04.324670  115591 addons.go:69] Setting default-storageclass=true in profile "embed-certs-209025"
	I1206 20:01:04.324699  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.324702  115591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-209025"
	I1206 20:01:04.324729  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:01:04.324799  115591 addons.go:69] Setting metrics-server=true in profile "embed-certs-209025"
	I1206 20:01:04.324813  115591 addons.go:231] Setting addon metrics-server=true in "embed-certs-209025"
	W1206 20:01:04.324820  115591 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:04.324858  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.325100  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325126  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325127  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325163  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325191  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325213  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.344127  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I1206 20:01:04.344361  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I1206 20:01:04.344866  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.344978  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.345615  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345635  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.345756  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345766  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.346201  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.346772  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.346821  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.347367  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.347741  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.348264  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
	I1206 20:01:04.348754  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.349655  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.349676  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.350156  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.352233  115591 addons.go:231] Setting addon default-storageclass=true in "embed-certs-209025"
	W1206 20:01:04.352257  115591 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:04.352286  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.352700  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.352734  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.353530  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.353563  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.365607  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I1206 20:01:04.366094  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.366493  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.366514  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.366780  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.366908  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.368611  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.370655  115591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:04.372351  115591 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.372372  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1206 20:01:04.372376  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:04.372402  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.373021  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I1206 20:01:04.374446  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.375104  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.375126  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.375570  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.375769  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.376448  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.376851  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.376907  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.377123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.377377  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.377531  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.379514  115591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:04.377862  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.378152  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.381562  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.381682  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:04.381700  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:04.381722  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.382619  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.382788  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.383576  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.384146  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.384176  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.386297  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.386684  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.386734  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.387477  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.387726  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.387913  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.388055  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.401629  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I1206 20:01:04.402214  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.402804  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.402826  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.403127  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.403337  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.405059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.405404  115591 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.405427  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:04.405449  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.408608  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409145  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.409176  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409443  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.409640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.409860  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.410016  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	W1206 20:01:04.462788  115591 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "embed-certs-209025" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1206 20:01:04.462843  115591 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1206 20:01:04.462872  115591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:04.464916  115591 out.go:177] * Verifying Kubernetes components...
	I1206 20:01:04.466388  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:01.039870  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:03.550944  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.231905  115078 pod_ready.go:81] duration metric: took 4m0.001038985s waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:05.231950  115078 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:01:05.231962  115078 pod_ready.go:38] duration metric: took 4m4.801417566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:05.231988  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:05.232081  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:05.232155  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:05.294538  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:05.294570  115078 cri.go:89] found id: ""
	I1206 20:01:05.294581  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:05.294643  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.300221  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:05.300300  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:05.359655  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:05.359685  115078 cri.go:89] found id: ""
	I1206 20:01:05.359696  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:05.359759  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.364518  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:05.364600  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:05.408448  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:05.408490  115078 cri.go:89] found id: ""
	I1206 20:01:05.408510  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:05.408575  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.413345  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:05.413428  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:05.462932  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.462960  115078 cri.go:89] found id: ""
	I1206 20:01:05.462971  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:05.463034  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.468632  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:05.468713  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:05.519690  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:05.519720  115078 cri.go:89] found id: ""
	I1206 20:01:05.519731  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:05.519789  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.525847  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:05.525933  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:05.580475  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:05.580537  115078 cri.go:89] found id: ""
	I1206 20:01:05.580550  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:05.580623  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.585602  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:05.585688  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:05.636350  115078 cri.go:89] found id: ""
	I1206 20:01:05.636383  115078 logs.go:284] 0 containers: []
	W1206 20:01:05.636394  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:05.636403  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:05.636469  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:05.678819  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:05.678846  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:05.678853  115078 cri.go:89] found id: ""
	I1206 20:01:05.678863  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:05.678929  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.683845  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.689989  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:05.690021  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.745510  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:05.745554  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:04.580869  115591 node_ready.go:35] waiting up to 6m0s for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.580933  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:04.585219  115591 node_ready.go:49] node "embed-certs-209025" has status "Ready":"True"
	I1206 20:01:04.585267  115591 node_ready.go:38] duration metric: took 4.363508ms waiting for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.585281  115591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:04.595166  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:04.611829  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.622127  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.628233  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:04.628260  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:04.706473  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:04.706498  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:04.790827  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:04.790868  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:04.840367  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:06.312054  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.73108071s)
	I1206 20:01:06.312092  115591 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:06.312099  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.700233834s)
	I1206 20:01:06.312147  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312503  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312519  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312531  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312541  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312895  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312985  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:06.334314  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.334343  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.334719  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.334742  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.677046  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.176051  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.553877678s)
	I1206 20:01:07.176112  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176124  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176520  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176551  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.176570  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176584  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176859  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.176852  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176884  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.287377  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.446934189s)
	I1206 20:01:07.287525  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.287586  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288055  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.288055  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288082  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288096  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.288105  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288358  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288372  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288384  115591 addons.go:467] Verifying addon metrics-server=true in "embed-certs-209025"
	I1206 20:01:07.291120  115591 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:03.100131  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.107571  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.599078  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.292151  115591 addons.go:502] enable addons completed in 2.967619291s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:01:09.122709  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:06.258156  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:06.258193  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:06.321049  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:06.321084  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:06.376243  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:06.376281  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:06.441701  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:06.441742  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:06.493399  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:06.493440  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:06.545681  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:06.545717  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:06.602830  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:06.602864  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:06.618874  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:06.618903  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:06.694329  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:06.694375  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:06.748217  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:06.748255  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:06.933616  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:06.933655  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.511340  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.530228  115078 api_server.go:72] duration metric: took 4m16.464196787s to wait for apiserver process to appear ...
	I1206 20:01:09.530254  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.530295  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:09.530357  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:09.574265  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.574301  115078 cri.go:89] found id: ""
	I1206 20:01:09.574313  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:09.574377  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.579240  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:09.579310  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:09.622512  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.622540  115078 cri.go:89] found id: ""
	I1206 20:01:09.622551  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:09.622619  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.627770  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:09.627847  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:09.675976  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:09.676007  115078 cri.go:89] found id: ""
	I1206 20:01:09.676018  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:09.676082  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.680750  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:09.680824  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:09.721081  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.721108  115078 cri.go:89] found id: ""
	I1206 20:01:09.721119  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:09.721181  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.725501  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:09.725568  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:09.777674  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:09.777700  115078 cri.go:89] found id: ""
	I1206 20:01:09.777709  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:09.777767  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.782475  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:09.782558  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:09.833889  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:09.833916  115078 cri.go:89] found id: ""
	I1206 20:01:09.833926  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:09.833985  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.838897  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:09.838977  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:09.880892  115078 cri.go:89] found id: ""
	I1206 20:01:09.880923  115078 logs.go:284] 0 containers: []
	W1206 20:01:09.880934  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:09.880943  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:09.881011  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:09.924025  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:09.924058  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:09.924065  115078 cri.go:89] found id: ""
	I1206 20:01:09.924075  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:09.924142  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.928667  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.933112  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:09.933134  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:09.949212  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:09.949254  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.996227  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:09.996261  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:10.046607  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:10.046645  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:10.102171  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:10.102214  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:10.160600  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:10.160641  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:10.203673  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:10.203709  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:10.681783  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:10.681824  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:10.813061  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:10.813102  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:10.857895  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:10.857930  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:10.904589  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:10.904625  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:10.957570  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:10.957608  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.624997  115591 pod_ready.go:92] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.625025  115591 pod_ready.go:81] duration metric: took 5.029829059s waiting for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.625038  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632534  115591 pod_ready.go:92] pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.632561  115591 pod_ready.go:81] duration metric: took 7.514952ms waiting for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632574  115591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642077  115591 pod_ready.go:92] pod "etcd-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.642107  115591 pod_ready.go:81] duration metric: took 9.52505ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642121  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648636  115591 pod_ready.go:92] pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.648658  115591 pod_ready.go:81] duration metric: took 6.530394ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648667  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656534  115591 pod_ready.go:92] pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.656561  115591 pod_ready.go:81] duration metric: took 7.887248ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656573  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019281  115591 pod_ready.go:92] pod "kube-proxy-nf2cw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.019310  115591 pod_ready.go:81] duration metric: took 362.727602ms waiting for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019323  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419938  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.419971  115591 pod_ready.go:81] duration metric: took 400.640145ms waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419982  115591 pod_ready.go:38] duration metric: took 5.834689614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:10.420000  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:10.420062  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:10.436691  115591 api_server.go:72] duration metric: took 5.973781556s to wait for apiserver process to appear ...
	I1206 20:01:10.436723  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:10.436746  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 20:01:10.442876  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 20:01:10.444774  115591 api_server.go:141] control plane version: v1.28.4
	I1206 20:01:10.444798  115591 api_server.go:131] duration metric: took 8.067787ms to wait for apiserver health ...
	I1206 20:01:10.444808  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:10.624219  115591 system_pods.go:59] 9 kube-system pods found
	I1206 20:01:10.624251  115591 system_pods.go:61] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:10.624256  115591 system_pods.go:61] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:10.624260  115591 system_pods.go:61] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:10.624264  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:10.624268  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:10.624272  115591 system_pods.go:61] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:10.624275  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:10.624282  115591 system_pods.go:61] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.624286  115591 system_pods.go:61] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:10.624296  115591 system_pods.go:74] duration metric: took 179.481721ms to wait for pod list to return data ...
	I1206 20:01:10.624306  115591 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:10.818715  115591 default_sa.go:45] found service account: "default"
	I1206 20:01:10.818741  115591 default_sa.go:55] duration metric: took 194.428895ms for default service account to be created ...
	I1206 20:01:10.818750  115591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:11.022686  115591 system_pods.go:86] 9 kube-system pods found
	I1206 20:01:11.022713  115591 system_pods.go:89] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:11.022718  115591 system_pods.go:89] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:11.022722  115591 system_pods.go:89] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:11.022726  115591 system_pods.go:89] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:11.022730  115591 system_pods.go:89] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:11.022734  115591 system_pods.go:89] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:11.022738  115591 system_pods.go:89] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:11.022744  115591 system_pods.go:89] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.022750  115591 system_pods.go:89] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:11.022762  115591 system_pods.go:126] duration metric: took 204.004835ms to wait for k8s-apps to be running ...
	I1206 20:01:11.022774  115591 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:11.022824  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:11.041212  115591 system_svc.go:56] duration metric: took 18.424469ms WaitForService to wait for kubelet.
	I1206 20:01:11.041256  115591 kubeadm.go:581] duration metric: took 6.578354937s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:11.041291  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:11.219045  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:11.219079  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:11.219094  115591 node_conditions.go:105] duration metric: took 177.793737ms to run NodePressure ...
	I1206 20:01:11.219107  115591 start.go:228] waiting for startup goroutines ...
	I1206 20:01:11.219113  115591 start.go:233] waiting for cluster config update ...
	I1206 20:01:11.219125  115591 start.go:242] writing updated cluster config ...
	I1206 20:01:11.219482  115591 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:11.275863  115591 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:01:11.278074  115591 out.go:177] * Done! kubectl is now configured to use "embed-certs-209025" cluster and "default" namespace by default
	I1206 20:01:09.099590  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.099616  115217 pod_ready.go:81] duration metric: took 8.363590309s waiting for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.099626  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.103452  115217 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103485  115217 pod_ready.go:81] duration metric: took 3.845902ms waiting for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:09.103499  115217 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103507  115217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110700  115217 pod_ready.go:92] pod "kube-proxy-wvqmw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.110721  115217 pod_ready.go:81] duration metric: took 7.207091ms waiting for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110729  115217 pod_ready.go:38] duration metric: took 8.477100108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:09.110744  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:09.110791  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.127244  115217 api_server.go:72] duration metric: took 8.855777965s to wait for apiserver process to appear ...
	I1206 20:01:09.127272  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.127290  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 20:01:09.134411  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 20:01:09.135553  115217 api_server.go:141] control plane version: v1.16.0
	I1206 20:01:09.135578  115217 api_server.go:131] duration metric: took 8.298936ms to wait for apiserver health ...
	I1206 20:01:09.135589  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:09.140145  115217 system_pods.go:59] 4 kube-system pods found
	I1206 20:01:09.140167  115217 system_pods.go:61] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.140172  115217 system_pods.go:61] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.140178  115217 system_pods.go:61] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.140183  115217 system_pods.go:61] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.140191  115217 system_pods.go:74] duration metric: took 4.595695ms to wait for pod list to return data ...
	I1206 20:01:09.140198  115217 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:09.142852  115217 default_sa.go:45] found service account: "default"
	I1206 20:01:09.142877  115217 default_sa.go:55] duration metric: took 2.67139ms for default service account to be created ...
	I1206 20:01:09.142888  115217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:09.145800  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.145822  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.145827  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.145833  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.145838  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.145856  115217 retry.go:31] will retry after 199.361191ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.351430  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.351475  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.351485  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.351497  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.351504  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.351529  115217 retry.go:31] will retry after 239.084983ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.595441  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.595479  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.595487  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.595498  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.595506  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.595528  115217 retry.go:31] will retry after 380.909676ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.982061  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.982088  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.982093  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.982101  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.982115  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.982133  115217 retry.go:31] will retry after 451.472574ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:10.439270  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:10.439303  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:10.439311  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:10.439321  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.439328  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:10.439350  115217 retry.go:31] will retry after 654.845182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.101088  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.101129  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.101137  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.101147  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.101155  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.101178  115217 retry.go:31] will retry after 650.939663ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.757024  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.757053  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.757058  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.757065  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.757070  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.757088  115217 retry.go:31] will retry after 828.555469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:12.591156  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:12.591193  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:12.591209  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:12.591220  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:12.591227  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:12.591254  115217 retry.go:31] will retry after 1.26518336s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.000472  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:11.000505  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.545345  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 20:01:13.551262  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 20:01:13.553129  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 20:01:13.553161  115078 api_server.go:131] duration metric: took 4.022898619s to wait for apiserver health ...
	I1206 20:01:13.553173  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:13.553204  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:13.553287  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:13.619861  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:13.619892  115078 cri.go:89] found id: ""
	I1206 20:01:13.619903  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:13.619994  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.625028  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:13.625099  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:13.667275  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:13.667300  115078 cri.go:89] found id: ""
	I1206 20:01:13.667309  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:13.667378  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.671673  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:13.671740  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:13.713319  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.713351  115078 cri.go:89] found id: ""
	I1206 20:01:13.713361  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:13.713428  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.718155  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:13.718219  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:13.758383  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.758414  115078 cri.go:89] found id: ""
	I1206 20:01:13.758424  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:13.758488  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.762747  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:13.762826  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:13.803602  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:13.803627  115078 cri.go:89] found id: ""
	I1206 20:01:13.803635  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:13.803685  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.808083  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:13.808160  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:13.852504  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:13.852531  115078 cri.go:89] found id: ""
	I1206 20:01:13.852539  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:13.852598  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.857213  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:13.857322  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:13.896981  115078 cri.go:89] found id: ""
	I1206 20:01:13.897023  115078 logs.go:284] 0 containers: []
	W1206 20:01:13.897035  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:13.897044  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:13.897110  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:13.940969  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:13.940996  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:13.941004  115078 cri.go:89] found id: ""
	I1206 20:01:13.941013  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:13.941075  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.945508  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.949933  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:13.949961  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.986034  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:13.986065  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:14.045155  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:14.045197  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:14.091205  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:14.091240  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:14.130184  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:14.130221  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:14.176981  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:14.177024  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:14.191755  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:14.191796  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:14.316375  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:14.316413  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:14.359700  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:14.359746  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:14.415906  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:14.415952  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:14.471453  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:14.471496  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:14.520012  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:14.520051  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:14.567445  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:14.567482  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
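	(Editor's note) The block above records, verbatim, the commands minikube issues over SSH to collect component logs before re-checking pod state. They can be repeated by hand for the same data; a minimal sketch, assuming shell access to the minikube VM and with <container-id> standing in for one of the IDs printed above:

	    # list containers for one component (same --name filter the cri.go lines use)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # tail the last 400 lines of that container's log, as logs.go does
	    sudo /usr/bin/crictl logs --tail 400 <container-id>
	    # unit logs for the kubelet and for CRI-O
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # recent kernel warnings and errors, matching the dmesg invocation above
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400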
	I1206 20:01:17.434636  115078 system_pods.go:59] 8 kube-system pods found
	I1206 20:01:17.434671  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.434676  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.434680  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.434685  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.434688  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.434692  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.434700  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.434706  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.434714  115078 system_pods.go:74] duration metric: took 3.881535405s to wait for pod list to return data ...
	I1206 20:01:17.434724  115078 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:17.437744  115078 default_sa.go:45] found service account: "default"
	I1206 20:01:17.437770  115078 default_sa.go:55] duration metric: took 3.038532ms for default service account to be created ...
	I1206 20:01:17.437780  115078 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:17.444539  115078 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:17.444567  115078 system_pods.go:89] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.444572  115078 system_pods.go:89] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.444577  115078 system_pods.go:89] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.444583  115078 system_pods.go:89] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.444587  115078 system_pods.go:89] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.444592  115078 system_pods.go:89] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.444602  115078 system_pods.go:89] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.444608  115078 system_pods.go:89] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.444619  115078 system_pods.go:126] duration metric: took 6.832576ms to wait for k8s-apps to be running ...
	I1206 20:01:17.444629  115078 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:17.444687  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:17.464821  115078 system_svc.go:56] duration metric: took 20.181153ms WaitForService to wait for kubelet.
	I1206 20:01:17.464866  115078 kubeadm.go:581] duration metric: took 4m24.398841426s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:17.464894  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:17.467938  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:17.467964  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:17.467975  115078 node_conditions.go:105] duration metric: took 3.076458ms to run NodePressure ...
	I1206 20:01:17.467988  115078 start.go:228] waiting for startup goroutines ...
	I1206 20:01:17.467994  115078 start.go:233] waiting for cluster config update ...
	I1206 20:01:17.468004  115078 start.go:242] writing updated cluster config ...
	I1206 20:01:17.468290  115078 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:17.523451  115078 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1206 20:01:17.525609  115078 out.go:177] * Done! kubectl is now configured to use "no-preload-989559" cluster and "default" namespace by default
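	(Editor's note) The run above completes once three checks pass: the apiserver answers /healthz, the expected kube-system pods are Running, and the kubelet unit is active. A rough way to repeat those checks against the same profile, assuming the 192.168.39.5:8443 endpoint from this log is still reachable and anonymous access to /healthz is enabled (the Kubernetes default):

	    # apiserver health, as api_server.go polls it (-k because the cluster CA is not in the host trust store)
	    curl -k https://192.168.39.5:8443/healthz
	    # the kube-system pods minikube waited on
	    kubectl --context no-preload-989559 get pods -n kube-system
	    # kubelet service state, matching the systemctl probe in the log
	    minikube -p no-preload-989559 ssh "sudo systemctl is-active kubelet"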
	I1206 20:01:13.862479  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:13.862506  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:13.862512  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:13.862519  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:13.862523  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:13.862542  115217 retry.go:31] will retry after 1.299046526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:15.166601  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:15.166630  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:15.166635  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:15.166642  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:15.166647  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:15.166667  115217 retry.go:31] will retry after 1.832151574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:17.005707  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:17.005739  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:17.005746  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:17.005754  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.005774  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:17.005797  115217 retry.go:31] will retry after 1.796371959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:18.808729  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:18.808757  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:18.808763  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:18.808770  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:18.808775  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:18.808792  115217 retry.go:31] will retry after 2.814845209s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:21.630762  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:21.630791  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:21.630796  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:21.630811  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:21.630816  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:21.630834  115217 retry.go:31] will retry after 2.866148194s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:24.502168  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:24.502198  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:24.502203  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:24.502211  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:24.502215  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:24.502233  115217 retry.go:31] will retry after 3.777894628s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:28.284776  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:28.284812  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:28.284818  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:28.284825  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:28.284829  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:28.284847  115217 retry.go:31] will retry after 4.837538668s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:33.127301  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:33.127330  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:33.127336  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:33.127344  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:33.127349  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:33.127370  115217 retry.go:31] will retry after 6.833662344s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:39.966417  115217 system_pods.go:86] 5 kube-system pods found
	I1206 20:01:39.966450  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:39.966458  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Pending
	I1206 20:01:39.966465  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:39.966476  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:39.966483  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:39.966504  115217 retry.go:31] will retry after 9.204033337s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:49.176395  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:49.176434  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:49.176442  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Pending
	I1206 20:01:49.176450  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:49.176457  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:49.176462  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:49.176469  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Pending
	I1206 20:01:49.176479  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:49.176487  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:49.176511  115217 retry.go:31] will retry after 9.456016194s: missing components: etcd, kube-scheduler
	I1206 20:01:58.638807  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:58.638837  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:58.638842  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Running
	I1206 20:01:58.638847  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:58.638851  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:58.638855  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:58.638861  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Running
	I1206 20:01:58.638867  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:58.638872  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:58.638879  115217 system_pods.go:126] duration metric: took 49.495986809s to wait for k8s-apps to be running ...
	I1206 20:01:58.638886  115217 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:58.638935  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:58.654683  115217 system_svc.go:56] duration metric: took 15.783018ms WaitForService to wait for kubelet.
	I1206 20:01:58.654715  115217 kubeadm.go:581] duration metric: took 58.383258338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:58.654738  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:58.659189  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:58.659215  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:58.659226  115217 node_conditions.go:105] duration metric: took 4.482979ms to run NodePressure ...
	I1206 20:01:58.659239  115217 start.go:228] waiting for startup goroutines ...
	I1206 20:01:58.659245  115217 start.go:233] waiting for cluster config update ...
	I1206 20:01:58.659255  115217 start.go:242] writing updated cluster config ...
	I1206 20:01:58.659522  115217 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:58.710716  115217 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1206 20:01:58.713372  115217 out.go:177] 
	W1206 20:01:58.714711  115217 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1206 20:01:58.716208  115217 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1206 20:01:58.717734  115217 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-448851" cluster and "default" namespace by default
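	(Editor's note) The warning above is expected: the host kubectl (1.28.4) and the 1.16.0 cluster are twelve minor versions apart, far outside kubectl's supported +/-1 version skew, so some commands may misbehave. The log's own hint is the safe route, using the kubectl binary minikube downloads to match the cluster version:

	    # version-matched kubectl for this profile, per the hint printed above
	    minikube -p old-k8s-version-448851 kubectl -- get pods -A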
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:55:16 UTC, ends at Wed 2023-12-06 20:09:52 UTC. --
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.497590178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893392497569329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=95510339-8f93-4baf-8ddc-e6d7aeda0454 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.498201285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d6e2b675-8bd0-4a75-af7b-007d542de76a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.498305368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d6e2b675-8bd0-4a75-af7b-007d542de76a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.498580550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d6e2b675-8bd0-4a75-af7b-007d542de76a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.539343694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bd138800-e505-4c49-a773-01af41a967d9 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.539400419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bd138800-e505-4c49-a773-01af41a967d9 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.541278931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9e28b66e-b611-44d4-8d28-9f74e50f663a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.541738969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893392541723691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9e28b66e-b611-44d4-8d28-9f74e50f663a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.542311979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d46890e-e6de-49cb-8165-b4d4048c8015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.542380251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d46890e-e6de-49cb-8165-b4d4048c8015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.542655367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d46890e-e6de-49cb-8165-b4d4048c8015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.587134267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=21e7d24a-19cc-49bb-bfd1-106dfbee4a3b name=/runtime.v1.RuntimeService/Version
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.587277365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=21e7d24a-19cc-49bb-bfd1-106dfbee4a3b name=/runtime.v1.RuntimeService/Version
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.588249754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3aaa7033-05f4-4fc7-808f-275a28da2db3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.588712144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893392588697118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3aaa7033-05f4-4fc7-808f-275a28da2db3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.589172036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=af2da708-029a-452b-bbce-5699296beca9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.589243776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=af2da708-029a-452b-bbce-5699296beca9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.589403189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=af2da708-029a-452b-bbce-5699296beca9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.627363813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1f8fa3f7-9e45-4aaa-8bbc-2a325a8a1bac name=/runtime.v1.RuntimeService/Version
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.627508569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1f8fa3f7-9e45-4aaa-8bbc-2a325a8a1bac name=/runtime.v1.RuntimeService/Version
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.628878916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=48839aa5-8173-48f2-b06c-a259e0e7d3b0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.629322716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893392629308795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=48839aa5-8173-48f2-b06c-a259e0e7d3b0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.630277808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=061ef4d8-b62f-4024-80e5-f645eadcb564 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.630324828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=061ef4d8-b62f-4024-80e5-f645eadcb564 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:09:52 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:09:52.630535064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=061ef4d8-b62f-4024-80e5-f645eadcb564 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c9aadff3bd822       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6c07cabd56c24       storage-provisioner
	cdab86736d83b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   7de0529ee18ea       kube-proxy-khh5n
	32578a0cf908f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   64620387bce08       coredns-5dd5756b68-x6p7t
	ae6ebd5fabd5a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   73ac3548d3b18       etcd-default-k8s-diff-port-380424
	23de0ede546b1       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   c1741aadbbce6       kube-scheduler-default-k8s-diff-port-380424
	45732ee62285b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   b171f1df8871e       kube-controller-manager-default-k8s-diff-port-380424
	f1559f7cdd0f7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   3309269f7ecf4       kube-apiserver-default-k8s-diff-port-380424
	
	* 
	* ==> coredns [32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-380424
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-380424
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=default-k8s-diff-port-380424
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T20_00_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 20:00:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-380424
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 20:09:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:05:57 +0000   Wed, 06 Dec 2023 20:00:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:05:57 +0000   Wed, 06 Dec 2023 20:00:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:05:57 +0000   Wed, 06 Dec 2023 20:00:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:05:57 +0000   Wed, 06 Dec 2023 20:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.22
	  Hostname:    default-k8s-diff-port-380424
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8a1bdeb7e4d419e931c84253ccf1761
	  System UUID:                f8a1bdeb-7e4d-419e-931c-84253ccf1761
	  Boot ID:                    398861ae-9d73-4692-a98d-772a0cb22307
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-x6p7t                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-default-k8s-diff-port-380424                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-380424             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-380424    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-khh5n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-380424             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-xpbtp                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m9s                   node-controller  Node default-k8s-diff-port-380424 event: Registered Node default-k8s-diff-port-380424 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067869] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.515663] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.529510] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145082] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.495817] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.068363] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.127794] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.157712] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.102672] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.248073] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[ +17.797470] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Dec 6 19:56] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 6 20:00] systemd-fstab-generator[3533]: Ignoring "noauto" for root device
	[ +10.287204] systemd-fstab-generator[3865]: Ignoring "noauto" for root device
	[ +15.983459] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94] <==
	* {"level":"info","ts":"2023-12-06T20:00:24.279309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 switched to configuration voters=(9280452684968431393)"}
	{"level":"info","ts":"2023-12-06T20:00:24.279524Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ceec70a6b9eea11d","local-member-id":"80caca8c0a5d0f21","added-peer-id":"80caca8c0a5d0f21","added-peer-peer-urls":["https://192.168.72.22:2380"]}
	{"level":"info","ts":"2023-12-06T20:00:24.280228Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-06T20:00:24.281623Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.22:2380"}
	{"level":"info","ts":"2023-12-06T20:00:24.281788Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.22:2380"}
	{"level":"info","ts":"2023-12-06T20:00:24.282674Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-06T20:00:24.282605Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"80caca8c0a5d0f21","initial-advertise-peer-urls":["https://192.168.72.22:2380"],"listen-peer-urls":["https://192.168.72.22:2380"],"advertise-client-urls":["https://192.168.72.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-06T20:00:24.539562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:24.539692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:24.539739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 received MsgPreVoteResp from 80caca8c0a5d0f21 at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:24.539778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 became candidate at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.539813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 received MsgVoteResp from 80caca8c0a5d0f21 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.539849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 became leader at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.539882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 80caca8c0a5d0f21 elected leader 80caca8c0a5d0f21 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.543805Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"80caca8c0a5d0f21","local-member-attributes":"{Name:default-k8s-diff-port-380424 ClientURLs:[https://192.168.72.22:2379]}","request-path":"/0/members/80caca8c0a5d0f21/attributes","cluster-id":"ceec70a6b9eea11d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T20:00:24.543901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:24.545503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.22:2379"}
	{"level":"info","ts":"2023-12-06T20:00:24.545547Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:24.550733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T20:00:24.550802Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T20:00:24.546222Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:24.558006Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T20:00:24.588605Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ceec70a6b9eea11d","local-member-id":"80caca8c0a5d0f21","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:24.588876Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:24.588985Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  20:09:53 up 14 min,  0 users,  load average: 0.15, 0.22, 0.18
	Linux default-k8s-diff-port-380424 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1] <==
	* W1206 20:05:27.862898       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:05:27.863050       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:05:27.863097       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:05:27.862980       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:05:27.863223       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:05:27.864422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:06:26.745653       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:06:27.863867       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:06:27.864023       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:06:27.864057       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:06:27.865172       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:06:27.865290       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:06:27.865336       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:07:26.746105       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1206 20:08:26.745797       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:08:27.865294       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:08:27.865595       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:08:27.865642       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:08:27.865738       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:08:27.865855       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:08:27.867716       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:09:26.746243       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f] <==
	* I1206 20:04:18.117668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.81µs"
	E1206 20:04:43.958395       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:04:44.396257       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:05:13.967874       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:05:14.405891       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:05:43.975262       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:05:44.414719       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:06:13.981187       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:06:14.429049       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:06:43.988151       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:06:44.442743       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:06:54.116929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="591.742µs"
	I1206 20:07:08.121690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="324.147µs"
	E1206 20:07:13.994745       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:07:14.453285       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:07:44.001284       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:07:44.462865       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:08:14.008259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:08:14.472954       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:08:44.014227       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:08:44.482015       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:09:14.020314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:09:14.491807       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:09:44.027218       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:09:44.501359       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0] <==
	* I1206 20:00:48.926614       1 server_others.go:69] "Using iptables proxy"
	I1206 20:00:48.963875       1 node.go:141] Successfully retrieved node IP: 192.168.72.22
	I1206 20:00:49.068654       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1206 20:00:49.068731       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 20:00:49.072621       1 server_others.go:152] "Using iptables Proxier"
	I1206 20:00:49.073603       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 20:00:49.073841       1 server.go:846] "Version info" version="v1.28.4"
	I1206 20:00:49.074035       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 20:00:49.080040       1 config.go:188] "Starting service config controller"
	I1206 20:00:49.080929       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 20:00:49.080985       1 config.go:97] "Starting endpoint slice config controller"
	I1206 20:00:49.081007       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 20:00:49.086669       1 config.go:315] "Starting node config controller"
	I1206 20:00:49.086720       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 20:00:49.181945       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 20:00:49.182016       1 shared_informer.go:318] Caches are synced for service config
	I1206 20:00:49.187014       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe] <==
	* W1206 20:00:27.916562       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 20:00:27.916651       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 20:00:28.022138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 20:00:28.022247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1206 20:00:28.049602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:28.049717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 20:00:28.050362       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.050501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:28.061305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 20:00:28.061408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 20:00:28.140170       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.140285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:28.197958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:28.198149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1206 20:00:28.322742       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:28.322812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 20:00:28.383862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.383927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:28.484929       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 20:00:28.484966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1206 20:00:28.487018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:28.487177       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1206 20:00:28.533069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.533233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1206 20:00:30.990651       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:55:16 UTC, ends at Wed 2023-12-06 20:09:53 UTC. --
	Dec 06 20:07:08 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:07:08.100333    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:07:20 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:07:20.099957    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:07:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:07:31.217641    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:07:31 default-k8s-diff-port-380424 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:07:31 default-k8s-diff-port-380424 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:07:31 default-k8s-diff-port-380424 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:07:34 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:07:34.099629    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:07:46 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:07:46.098733    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:07:59 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:07:59.100209    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:08:12 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:08:12.098803    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:08:24 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:08:24.098677    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:08:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:08:31.215225    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:08:31 default-k8s-diff-port-380424 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:08:31 default-k8s-diff-port-380424 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:08:31 default-k8s-diff-port-380424 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:08:36 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:08:36.098795    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:08:51 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:08:51.099683    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:09:04 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:09:04.098706    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:09:19 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:09:19.098526    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:09:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:09:31.099643    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:09:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:09:31.217035    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:09:31 default-k8s-diff-port-380424 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:09:31 default-k8s-diff-port-380424 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:09:31 default-k8s-diff-port-380424 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:09:42 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:09:42.098645    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	
	* 
	* ==> storage-provisioner [c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260] <==
	* I1206 20:00:48.986771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 20:00:49.005108       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 20:00:49.005413       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 20:00:49.022750       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 20:00:49.023096       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-380424_5d9f0bc5-eca4-46a5-be9a-f93670efd2e9!
	I1206 20:00:49.026161       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b9a0450-fc18-4e96-8af1-f60dc2ead67b", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-380424_5d9f0bc5-eca4-46a5-be9a-f93670efd2e9 became leader
	I1206 20:00:49.123761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-380424_5d9f0bc5-eca4-46a5-be9a-f93670efd2e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xpbtp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 describe pod metrics-server-57f55c9bc5-xpbtp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-380424 describe pod metrics-server-57f55c9bc5-xpbtp: exit status 1 (79.297367ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xpbtp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-380424 describe pod metrics-server-57f55c9bc5-xpbtp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-209025 -n embed-certs-209025
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:10:11.910963287 +0000 UTC m=+5385.995457095
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-209025 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-209025 logs -n 25: (1.772373386s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-459609 sudo cat                              | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo find                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo crio                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:50:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:50:49.512923  115591 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:50:49.513070  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513079  115591 out.go:309] Setting ErrFile to fd 2...
	I1206 19:50:49.513084  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513305  115591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:50:49.513900  115591 out.go:303] Setting JSON to false
	I1206 19:50:49.514822  115591 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9200,"bootTime":1701883050,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:50:49.514886  115591 start.go:138] virtualization: kvm guest
	I1206 19:50:49.517831  115591 out.go:177] * [embed-certs-209025] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:50:49.519496  115591 notify.go:220] Checking for updates...
	I1206 19:50:49.519507  115591 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:50:49.521356  115591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:50:49.523241  115591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:50:49.525016  115591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:50:49.526632  115591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:50:49.528148  115591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:50:49.530159  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:50:49.530586  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.530636  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.545128  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I1206 19:50:49.545881  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.547345  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.547375  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.547739  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.547926  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.548144  115591 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:50:49.548458  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.548506  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.562767  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I1206 19:50:49.563139  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.563567  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.563588  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.563913  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.564112  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.600267  115591 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:50:49.601977  115591 start.go:298] selected driver: kvm2
	I1206 19:50:49.601996  115591 start.go:902] validating driver "kvm2" against &{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.602089  115591 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:50:49.602812  115591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.602891  115591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:50:49.617831  115591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:50:49.618234  115591 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:50:49.618296  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:50:49.618306  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:50:49.618316  115591 start_flags.go:323] config:
	{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.618468  115591 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.620428  115591 out.go:177] * Starting control plane node embed-certs-209025 in cluster embed-certs-209025
	I1206 19:50:46.558601  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:46.558636  115497 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:46.558644  115497 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:46.558714  115497 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:46.558724  115497 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:46.558837  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:50:46.559024  115497 start.go:365] acquiring machines lock for default-k8s-diff-port-380424: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:49.622242  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:49.622298  115591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:49.622320  115591 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:49.622419  115591 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:49.622431  115591 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:49.622525  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:50:49.622798  115591 start.go:365] acquiring machines lock for embed-certs-209025: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:51.693503  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:50:54.765519  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:00.845535  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:03.917509  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:09.997591  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:13.069427  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:19.149482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:22.221565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:28.301531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:31.373569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:37.453523  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:40.525531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:46.605538  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:49.677544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:55.757544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:58.829552  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:04.909569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:07.981555  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:14.061549  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:17.133576  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:23.213558  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:26.285482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:32.365550  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:35.437574  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:41.517473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:44.589458  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:50.669534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:53.741496  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:59.821528  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:02.893489  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:08.973534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:12.045527  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:18.125473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:21.197472  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:27.277533  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:30.349580  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:36.429514  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:39.501584  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:45.581524  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:48.653547  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:54.733543  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:57.805491  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:03.885571  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:06.957565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:13.037470  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:16.109461  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:22.189477  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:25.261563  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:31.341534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:34.413513  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:40.493530  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:43.497878  115217 start.go:369] acquired machines lock for "old-k8s-version-448851" in 4m25.369261381s
	I1206 19:54:43.497937  115217 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:54:43.497949  115217 fix.go:54] fixHost starting: 
	I1206 19:54:43.498301  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:54:43.498331  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:54:43.513072  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I1206 19:54:43.513520  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:54:43.514005  115217 main.go:141] libmachine: Using API Version  1
	I1206 19:54:43.514035  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:54:43.514375  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:54:43.514571  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:54:43.514716  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 19:54:43.516245  115217 fix.go:102] recreateIfNeeded on old-k8s-version-448851: state=Stopped err=<nil>
	I1206 19:54:43.516266  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	W1206 19:54:43.516391  115217 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:54:43.518413  115217 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-448851" ...
	I1206 19:54:43.495395  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:54:43.495445  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:54:43.497720  115078 machine.go:91] provisioned docker machine in 4m37.37101565s
	I1206 19:54:43.497766  115078 fix.go:56] fixHost completed within 4m37.395231745s
	I1206 19:54:43.497773  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 4m37.395253694s
	W1206 19:54:43.497813  115078 start.go:694] error starting host: provision: host is not running
	W1206 19:54:43.497949  115078 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1206 19:54:43.497960  115078 start.go:709] Will try again in 5 seconds ...
	I1206 19:54:43.519752  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Start
	I1206 19:54:43.519905  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring networks are active...
	I1206 19:54:43.520785  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network default is active
	I1206 19:54:43.521155  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network mk-old-k8s-version-448851 is active
	I1206 19:54:43.521477  115217 main.go:141] libmachine: (old-k8s-version-448851) Getting domain xml...
	I1206 19:54:43.522123  115217 main.go:141] libmachine: (old-k8s-version-448851) Creating domain...
	I1206 19:54:44.758967  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting to get IP...
	I1206 19:54:44.759812  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:44.760194  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:44.760255  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:44.760156  116186 retry.go:31] will retry after 298.997725ms: waiting for machine to come up
	I1206 19:54:45.061071  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.061521  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.061545  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.061474  116186 retry.go:31] will retry after 338.263286ms: waiting for machine to come up
	I1206 19:54:45.401161  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.401614  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.401641  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.401572  116186 retry.go:31] will retry after 468.987471ms: waiting for machine to come up
	I1206 19:54:45.872203  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.872644  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.872675  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.872586  116186 retry.go:31] will retry after 447.252306ms: waiting for machine to come up
	I1206 19:54:46.321277  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.321583  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.321609  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.321549  116186 retry.go:31] will retry after 591.206607ms: waiting for machine to come up
	I1206 19:54:46.913936  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.914351  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.914412  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.914260  116186 retry.go:31] will retry after 888.979547ms: waiting for machine to come up
	I1206 19:54:47.805332  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:47.805783  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:47.805814  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:47.805722  116186 retry.go:31] will retry after 1.088490978s: waiting for machine to come up
	I1206 19:54:48.499603  115078 start.go:365] acquiring machines lock for no-preload-989559: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:54:48.895892  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:48.896316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:48.896347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:48.896249  116186 retry.go:31] will retry after 1.145932913s: waiting for machine to come up
	I1206 19:54:50.043740  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:50.044169  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:50.044199  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:50.044136  116186 retry.go:31] will retry after 1.302468984s: waiting for machine to come up
	I1206 19:54:51.347696  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:51.348093  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:51.348124  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:51.348039  116186 retry.go:31] will retry after 2.099836852s: waiting for machine to come up
	I1206 19:54:53.450166  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:53.450638  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:53.450678  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:53.450566  116186 retry.go:31] will retry after 1.877757048s: waiting for machine to come up
	I1206 19:54:55.331257  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:55.331697  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:55.331752  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:55.331671  116186 retry.go:31] will retry after 3.399849348s: waiting for machine to come up
	I1206 19:54:58.733325  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:58.733712  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:58.733736  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:58.733664  116186 retry.go:31] will retry after 4.308323214s: waiting for machine to come up
	I1206 19:55:04.350333  115497 start.go:369] acquired machines lock for "default-k8s-diff-port-380424" in 4m17.791271724s
	I1206 19:55:04.350411  115497 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:04.350426  115497 fix.go:54] fixHost starting: 
	I1206 19:55:04.350878  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:04.350927  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:04.367462  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I1206 19:55:04.367935  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:04.368546  115497 main.go:141] libmachine: Using API Version  1
	I1206 19:55:04.368580  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:04.368972  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:04.369197  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:04.369417  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 19:55:04.370940  115497 fix.go:102] recreateIfNeeded on default-k8s-diff-port-380424: state=Stopped err=<nil>
	I1206 19:55:04.370982  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	W1206 19:55:04.371135  115497 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:04.373809  115497 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-380424" ...
	I1206 19:55:03.047055  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047484  115217 main.go:141] libmachine: (old-k8s-version-448851) Found IP for machine: 192.168.61.33
	I1206 19:55:03.047516  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has current primary IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047527  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserving static IP address...
	I1206 19:55:03.048083  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.048116  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | skip adding static IP to network mk-old-k8s-version-448851 - found existing host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"}
	I1206 19:55:03.048135  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserved static IP address: 192.168.61.33
	I1206 19:55:03.048146  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting for SSH to be available...
	I1206 19:55:03.048158  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Getting to WaitForSSH function...
	I1206 19:55:03.050347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.050682  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050793  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH client type: external
	I1206 19:55:03.050872  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa (-rw-------)
	I1206 19:55:03.050913  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:03.050935  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | About to run SSH command:
	I1206 19:55:03.050956  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | exit 0
	I1206 19:55:03.137326  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:03.137753  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetConfigRaw
	I1206 19:55:03.138415  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.140903  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141314  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.141341  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141671  115217 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/config.json ...
	I1206 19:55:03.141899  115217 machine.go:88] provisioning docker machine ...
	I1206 19:55:03.141924  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:03.142133  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142284  115217 buildroot.go:166] provisioning hostname "old-k8s-version-448851"
	I1206 19:55:03.142305  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142511  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.144778  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145119  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.145144  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145289  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.145451  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145582  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145705  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.145829  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.146319  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.146343  115217 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448851 && echo "old-k8s-version-448851" | sudo tee /etc/hostname
	I1206 19:55:03.270447  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448851
	
	I1206 19:55:03.270490  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.273453  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273769  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.273802  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273957  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.274148  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274326  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274426  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.274576  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.274893  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.274910  115217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448851/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:03.395200  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:03.395232  115217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:03.395281  115217 buildroot.go:174] setting up certificates
	I1206 19:55:03.395298  115217 provision.go:83] configureAuth start
	I1206 19:55:03.395320  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.395585  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.397989  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398373  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.398405  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398547  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.400869  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401196  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.401223  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401369  115217 provision.go:138] copyHostCerts
	I1206 19:55:03.401492  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:03.401513  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:03.401600  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:03.401718  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:03.401730  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:03.401778  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:03.401857  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:03.401867  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:03.401899  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
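	The copyHostCerts step above refreshes ca.pem, cert.pem and key.pem under the .minikube root before provisioning continues. Below is a minimal Go sketch of that remove-then-copy pattern; it is not the actual exec_runner code, and the paths are simplified to $HOME/.minikube for illustration.

	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	// syncCert removes any stale copy at dst and writes src in its place,
	// mirroring the "found ..., removing ..." / "cp: ..." pairs in the log.
	func syncCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Assumed layout: certs live in $HOME/.minikube/certs and are copied up one level.
		base := os.ExpandEnv("$HOME/.minikube")
		for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
			src := filepath.Join(base, "certs", name)
			dst := filepath.Join(base, name)
			if err := syncCert(src, dst); err != nil {
				fmt.Fprintln(os.Stderr, name, err)
			}
		}
	}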
	I1206 19:55:03.401971  115217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448851 san=[192.168.61.33 192.168.61.33 localhost 127.0.0.1 minikube old-k8s-version-448851]
	I1206 19:55:03.655010  115217 provision.go:172] copyRemoteCerts
	I1206 19:55:03.655082  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:03.655110  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.657860  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658301  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.658336  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658529  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.658738  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.658914  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.659068  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:03.742021  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:03.765284  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 19:55:03.788562  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:03.811692  115217 provision.go:86] duration metric: configureAuth took 416.376347ms
	I1206 19:55:03.811722  115217 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:03.811943  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 19:55:03.812058  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.814518  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.814898  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.814934  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.815149  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.815371  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815541  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.815787  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.816094  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.816121  115217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:04.115752  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:04.115780  115217 machine.go:91] provisioned docker machine in 973.864642ms
	I1206 19:55:04.115790  115217 start.go:300] post-start starting for "old-k8s-version-448851" (driver="kvm2")
	I1206 19:55:04.115802  115217 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:04.115825  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.116197  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:04.116226  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.119234  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119559  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.119586  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119801  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.120047  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.120228  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.120391  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.203195  115217 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:04.207210  115217 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:04.207238  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:04.207315  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:04.207392  115217 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:04.207475  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:04.215469  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:04.238407  115217 start.go:303] post-start completed in 122.598676ms
	I1206 19:55:04.238437  115217 fix.go:56] fixHost completed within 20.740486511s
	I1206 19:55:04.238467  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.241147  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241522  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.241558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241720  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.241992  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242187  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242346  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.242488  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:04.242801  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:04.242813  115217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:04.350154  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892504.298339573
	
	I1206 19:55:04.350177  115217 fix.go:206] guest clock: 1701892504.298339573
	I1206 19:55:04.350185  115217 fix.go:219] Guest: 2023-12-06 19:55:04.298339573 +0000 UTC Remote: 2023-12-06 19:55:04.238442081 +0000 UTC m=+286.264851054 (delta=59.897492ms)
	I1206 19:55:04.350206  115217 fix.go:190] guest clock delta is within tolerance: 59.897492ms
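	The fix.go lines above read the guest clock over SSH (the date +%s.%N command) and accept the machine only when the delta against the host clock is small. The following is a rough local sketch of that comparison; the one-second tolerance is an assumption for this example, not minikube's actual threshold, and the command is run locally rather than over SSH.

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Stand-in for the remote "date +%s.%N" call in the log.
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			fmt.Println(err)
			return
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < time.Second)
	}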
	I1206 19:55:04.350212  115217 start.go:83] releasing machines lock for "old-k8s-version-448851", held for 20.852295937s
	I1206 19:55:04.350240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.350562  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:04.353070  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353519  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.353547  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353732  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354331  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354552  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354641  115217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:04.354689  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.354815  115217 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:04.354844  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.357316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357703  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.357734  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357841  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358006  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.358031  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358052  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.358161  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358241  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358322  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358448  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.358575  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358734  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.469402  115217 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:04.475231  115217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:04.618312  115217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:04.625482  115217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:04.625557  115217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:04.640251  115217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:04.640281  115217 start.go:475] detecting cgroup driver to use...
	I1206 19:55:04.640368  115217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:04.654153  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:04.666295  115217 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:04.666387  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:04.678579  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:04.692472  115217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:04.793090  115217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:04.909331  115217 docker.go:219] disabling docker service ...
	I1206 19:55:04.909399  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:04.922479  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:04.934122  115217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:05.048844  115217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:05.156415  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:05.172525  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:05.190303  115217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1206 19:55:05.190363  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.199967  115217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:05.200048  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.209725  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.218770  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.227835  115217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
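	The two sed runs above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf. Here is an illustrative Go equivalent of those edits, assuming the file is plain text and editable on the local machine; it is not the real ssh_runner code path.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		text := string(data)
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|'
		text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(conf, []byte(text), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}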
	I1206 19:55:05.237006  115217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:05.244839  115217 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:05.244899  115217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:05.256528  115217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:05.266360  115217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:05.387203  115217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:05.555553  115217 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:05.555668  115217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:05.564619  115217 start.go:543] Will wait 60s for crictl version
	I1206 19:55:05.564682  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:05.568979  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:05.611883  115217 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:05.611986  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.666757  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.725942  115217 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1206 19:55:04.375626  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Start
	I1206 19:55:04.375819  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring networks are active...
	I1206 19:55:04.376548  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network default is active
	I1206 19:55:04.376923  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network mk-default-k8s-diff-port-380424 is active
	I1206 19:55:04.377416  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Getting domain xml...
	I1206 19:55:04.378003  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Creating domain...
	I1206 19:55:05.667493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting to get IP...
	I1206 19:55:05.668629  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669112  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669148  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.669064  116315 retry.go:31] will retry after 259.414087ms: waiting for machine to come up
	I1206 19:55:05.930773  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931232  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.931129  116315 retry.go:31] will retry after 319.702286ms: waiting for machine to come up
	I1206 19:55:06.252911  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253423  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253458  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.253363  116315 retry.go:31] will retry after 403.286071ms: waiting for machine to come up
	I1206 19:55:05.727444  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:05.730503  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.730864  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:05.730900  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.731151  115217 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:05.735826  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
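	The /etc/hosts one-liner above drops any stale host.minikube.internal entry and appends a fresh one for the gateway IP. A simplified Go version of that upsert follows; the IP and hostname are taken from the log, error handling is minimal, and this is only a sketch of the shell command, not minikube's code.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the old entry, as grep -v does in the log
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}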
	I1206 19:55:05.748254  115217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 19:55:05.748312  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:05.799380  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:05.799468  115217 ssh_runner.go:195] Run: which lz4
	I1206 19:55:05.803715  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:05.808059  115217 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:05.808093  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1206 19:55:07.624367  115217 crio.go:444] Took 1.820689 seconds to copy over tarball
	I1206 19:55:07.624452  115217 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
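	The preload step above checks for /preloaded.tar.lz4 on the node, copies the cached tarball over when it is missing, and unpacks it with lz4-compressed tar into /var. The sketch below mirrors that flow under simplifying assumptions: a local file copy stands in for the scp over SSH, and the cache path is the one shown in the log.

	package main

	import (
		"fmt"
		"io"
		"os"
		"os/exec"
	)

	func copyFile(src, dst string) error {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); os.IsNotExist(err) {
			// In the real flow this is an scp to the guest; a local copy stands in here.
			src := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4")
			if err := copyFile(src, tarball); err != nil {
				fmt.Fprintln(os.Stderr, "copy:", err)
				return
			}
		}
		// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extract:", err)
		}
	}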
	I1206 19:55:06.658075  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.658710  116315 retry.go:31] will retry after 572.663186ms: waiting for machine to come up
	I1206 19:55:07.233562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233898  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233927  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.233861  116315 retry.go:31] will retry after 762.563485ms: waiting for machine to come up
	I1206 19:55:07.997980  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998453  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.998368  116315 retry.go:31] will retry after 885.694635ms: waiting for machine to come up
	I1206 19:55:08.885521  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885983  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:08.885918  116315 retry.go:31] will retry after 924.594214ms: waiting for machine to come up
	I1206 19:55:09.812796  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813271  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813305  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:09.813205  116315 retry.go:31] will retry after 1.485258028s: waiting for machine to come up
	I1206 19:55:11.300830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301385  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:11.301315  116315 retry.go:31] will retry after 1.232055429s: waiting for machine to come up
	I1206 19:55:10.452537  115217 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.828052972s)
	I1206 19:55:10.452565  115217 crio.go:451] Took 2.828166 seconds to extract the tarball
	I1206 19:55:10.452574  115217 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:10.493620  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:10.539181  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:10.539218  115217 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:55:10.539312  115217 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.539318  115217 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.539358  115217 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.539364  115217 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.539515  115217 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.539529  115217 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.539331  115217 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.539572  115217 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.540888  115217 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.540931  115217 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.540936  115217 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540880  115217 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.725027  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1206 19:55:10.762761  115217 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1206 19:55:10.762810  115217 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1206 19:55:10.762862  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.763731  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.766312  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.768181  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1206 19:55:10.773115  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.829543  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.841186  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.856309  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.873212  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.983390  115217 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1206 19:55:10.983444  115217 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.983463  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1206 19:55:10.983498  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983510  115217 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1206 19:55:10.983530  115217 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.983564  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1206 19:55:10.983628  115217 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1206 19:55:10.983663  115217 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.983672  115217 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1206 19:55:10.983700  115217 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.983712  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983567  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983730  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983802  115217 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1206 19:55:10.983829  115217 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.983861  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009102  115217 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1206 19:55:11.009135  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:11.009152  115217 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.009211  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009254  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1206 19:55:11.009273  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:11.009307  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:11.009342  115217 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1206 19:55:11.009355  115217 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009388  115217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009390  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:11.130238  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1206 19:55:11.158336  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1206 19:55:11.158375  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1206 19:55:11.158431  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.158438  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1206 19:55:11.158507  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1206 19:55:12.535831  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536331  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:12.536253  116315 retry.go:31] will retry after 1.865303927s: waiting for machine to come up
	I1206 19:55:14.402935  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403326  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403354  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:14.403268  116315 retry.go:31] will retry after 1.960994282s: waiting for machine to come up
	I1206 19:55:16.366289  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366792  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:16.366689  116315 retry.go:31] will retry after 2.933451161s: waiting for machine to come up
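	The retry.go lines above poll libvirt for the machine's DHCP lease with a growing, jittered delay ("will retry after ...: waiting for machine to come up"). Below is a generic Go sketch of that wait-with-backoff pattern; the helper name, delay schedule and timeout are invented here for illustration and do not reproduce minikube's retry package.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it returns nil or timeout elapses,
	// sleeping a randomized, slowly growing interval between attempts.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		base := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			// Grow the delay a little each attempt and add jitter, roughly
			// matching the increasing waits seen in the log.
			delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}

	func main() {
		tries := 0
		err := waitFor(func() error {
			tries++
			if tries < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 10*time.Second)
		fmt.Println("result:", err)
	}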
	I1206 19:55:13.478881  115217 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0: (2.320421557s)
	I1206 19:55:13.478937  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1206 19:55:13.478892  115217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.469478111s)
	I1206 19:55:13.478983  115217 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1206 19:55:13.479043  115217 cache_images.go:92] LoadImages completed in 2.939808867s
	W1206 19:55:13.479149  115217 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1206 19:55:13.479228  115217 ssh_runner.go:195] Run: crio config
	I1206 19:55:13.543270  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:13.543302  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:13.543328  115217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:13.543355  115217 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448851 NodeName:old-k8s-version-448851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 19:55:13.543557  115217 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-448851"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-448851
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.33:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:13.543700  115217 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-448851 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:13.543776  115217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1206 19:55:13.554524  115217 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:13.554611  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:13.566752  115217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1206 19:55:13.586027  115217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:13.603800  115217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1206 19:55:13.627098  115217 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:13.632470  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:13.651452  115217 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851 for IP: 192.168.61.33
	I1206 19:55:13.651507  115217 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:13.651670  115217 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:13.651748  115217 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:13.651860  115217 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.key
	I1206 19:55:13.651932  115217 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key.efa8c2ad
	I1206 19:55:13.651994  115217 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key
	I1206 19:55:13.652142  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:13.652183  115217 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:13.652201  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:13.652241  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:13.652283  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:13.652326  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:13.652389  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:13.653344  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:13.687786  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:13.723604  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:13.756434  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:13.789066  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:13.821087  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:13.849840  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:13.876520  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:13.901763  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:13.932106  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:13.961708  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:13.991586  115217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:14.009848  115217 ssh_runner.go:195] Run: openssl version
	I1206 19:55:14.017661  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:14.031103  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037142  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037212  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.044737  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:14.058296  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:14.068591  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.073995  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.074067  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.079922  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:14.090541  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:14.100915  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106692  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106766  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.112592  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:14.122630  115217 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:14.128544  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:14.136649  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:14.143060  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:14.151002  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:14.157202  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:14.163456  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
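	The openssl x509 -checkend 86400 runs above verify that each control-plane certificate is still valid for at least another 24 hours before the existing configuration is reused. A small Go equivalent using crypto/x509 is sketched below; the file path in main is just one of the certificates checked above, and this is not minikube's own certificate-checking code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemBytes
	// expires within the given duration (the analogue of -checkend).
	func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		soon, err := expiresWithin(data, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}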
	I1206 19:55:14.171607  115217 kubeadm.go:404] StartCluster: {Name:old-k8s-version-448851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:14.171720  115217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:14.171771  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:14.216630  115217 cri.go:89] found id: ""
	I1206 19:55:14.216712  115217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:14.229800  115217 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:14.229832  115217 kubeadm.go:636] restartCluster start
	I1206 19:55:14.229889  115217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:14.242347  115217 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.243973  115217 kubeconfig.go:92] found "old-k8s-version-448851" server: "https://192.168.61.33:8443"
	I1206 19:55:14.247781  115217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:14.257060  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.257118  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.268619  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.268644  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.268692  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.279803  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.780509  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.780603  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.796116  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.280797  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.280910  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.296260  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.779895  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.780023  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.796115  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.280792  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.280884  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.297258  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.780884  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.781007  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.796430  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.279982  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.280088  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.291102  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.780721  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.780865  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.792253  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.302288  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302717  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302744  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:19.302670  116315 retry.go:31] will retry after 3.226665023s: waiting for machine to come up
	I1206 19:55:18.280684  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.280777  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.292535  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:18.780650  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.780722  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.793872  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.280431  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.280507  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.292188  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.780793  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.780914  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.791873  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.280527  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.280637  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.291886  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.780810  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.780890  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.791837  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.280389  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.280479  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.291743  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.780252  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.780343  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.791452  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.280013  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.280120  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.291240  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.780451  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.780528  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.791668  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.690245  115591 start.go:369] acquired machines lock for "embed-certs-209025" in 4m34.06740814s
	I1206 19:55:23.690318  115591 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:23.690327  115591 fix.go:54] fixHost starting: 
	I1206 19:55:23.690686  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:23.690728  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:23.706483  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I1206 19:55:23.706891  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:23.707367  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:55:23.707391  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:23.707744  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:23.707925  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:23.708059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 19:55:23.709586  115591 fix.go:102] recreateIfNeeded on embed-certs-209025: state=Stopped err=<nil>
	I1206 19:55:23.709612  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	W1206 19:55:23.709803  115591 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:23.712015  115591 out.go:177] * Restarting existing kvm2 VM for "embed-certs-209025" ...
	I1206 19:55:23.713472  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Start
	I1206 19:55:23.713637  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring networks are active...
	I1206 19:55:23.714335  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network default is active
	I1206 19:55:23.714639  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network mk-embed-certs-209025 is active
	I1206 19:55:23.714978  115591 main.go:141] libmachine: (embed-certs-209025) Getting domain xml...
	I1206 19:55:23.715617  115591 main.go:141] libmachine: (embed-certs-209025) Creating domain...
	I1206 19:55:22.530618  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has current primary IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531107  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Found IP for machine: 192.168.72.22
	I1206 19:55:22.531117  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserving static IP address...
	I1206 19:55:22.531437  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.531465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | skip adding static IP to network mk-default-k8s-diff-port-380424 - found existing host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"}
	I1206 19:55:22.531485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Getting to WaitForSSH function...
	I1206 19:55:22.531496  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserved static IP address: 192.168.72.22
	I1206 19:55:22.531554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for SSH to be available...
	I1206 19:55:22.533485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533729  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.533752  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533853  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH client type: external
	I1206 19:55:22.533880  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa (-rw-------)
	I1206 19:55:22.533916  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:22.533941  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | About to run SSH command:
	I1206 19:55:22.533957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | exit 0
	I1206 19:55:22.620864  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:22.621194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetConfigRaw
	I1206 19:55:22.621844  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.624194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624565  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.624599  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624876  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:55:22.625062  115497 machine.go:88] provisioning docker machine ...
	I1206 19:55:22.625081  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:22.625310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625481  115497 buildroot.go:166] provisioning hostname "default-k8s-diff-port-380424"
	I1206 19:55:22.625502  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625635  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.627886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628227  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.628255  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.628499  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628658  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628784  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.628940  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.629440  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.629462  115497 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-380424 && echo "default-k8s-diff-port-380424" | sudo tee /etc/hostname
	I1206 19:55:22.753829  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-380424
	
	I1206 19:55:22.753867  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.756620  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.756958  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.757001  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.757129  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.757375  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757558  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757700  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.757868  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.758197  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.758252  115497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-380424' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-380424/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-380424' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:22.878138  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:22.878175  115497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:22.878202  115497 buildroot.go:174] setting up certificates
	I1206 19:55:22.878246  115497 provision.go:83] configureAuth start
	I1206 19:55:22.878259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.878557  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.881145  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881515  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.881547  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881657  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.883591  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.883943  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.883981  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.884062  115497 provision.go:138] copyHostCerts
	I1206 19:55:22.884122  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:22.884135  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:22.884203  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:22.884334  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:22.884346  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:22.884375  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:22.884446  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:22.884457  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:22.884484  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:22.884539  115497 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-380424 san=[192.168.72.22 192.168.72.22 localhost 127.0.0.1 minikube default-k8s-diff-port-380424]
	I1206 19:55:22.973559  115497 provision.go:172] copyRemoteCerts
	I1206 19:55:22.973627  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:22.973660  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.976374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976656  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.976695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976888  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.977068  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.977300  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.977468  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.061925  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:23.085093  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1206 19:55:23.108283  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:55:23.131666  115497 provision.go:86] duration metric: configureAuth took 253.404471ms
	I1206 19:55:23.131701  115497 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:23.131879  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:23.131957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.134672  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135033  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.135077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135214  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.135436  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135622  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135781  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.135941  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.136393  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.136427  115497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:23.445361  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:23.445389  115497 machine.go:91] provisioned docker machine in 820.312346ms
	I1206 19:55:23.445404  115497 start.go:300] post-start starting for "default-k8s-diff-port-380424" (driver="kvm2")
	I1206 19:55:23.445418  115497 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:23.445457  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.445851  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:23.445886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.448493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.448851  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.448879  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.449021  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.449210  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.449408  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.449562  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.535493  115497 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:23.539696  115497 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:23.539718  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:23.539780  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:23.539862  115497 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:23.539968  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:23.548629  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:23.572264  115497 start.go:303] post-start completed in 126.842848ms
	I1206 19:55:23.572287  115497 fix.go:56] fixHost completed within 19.221864403s
	I1206 19:55:23.572321  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.575329  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.575739  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575890  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.576093  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576272  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576429  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.576599  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.577046  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.577064  115497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:23.690035  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892523.637580982
	
	I1206 19:55:23.690064  115497 fix.go:206] guest clock: 1701892523.637580982
	I1206 19:55:23.690084  115497 fix.go:219] Guest: 2023-12-06 19:55:23.637580982 +0000 UTC Remote: 2023-12-06 19:55:23.572291664 +0000 UTC m=+277.181979500 (delta=65.289318ms)
	I1206 19:55:23.690146  115497 fix.go:190] guest clock delta is within tolerance: 65.289318ms
	I1206 19:55:23.690159  115497 start.go:83] releasing machines lock for "default-k8s-diff-port-380424", held for 19.339778523s
	I1206 19:55:23.690192  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.690465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:23.692996  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693337  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.693369  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694250  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694336  115497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:23.694390  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.694463  115497 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:23.694486  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.696938  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697063  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697393  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697473  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697514  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697593  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697674  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.697675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697876  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.697899  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.698044  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.698038  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.698167  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.786973  115497 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:23.814262  115497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:23.954235  115497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:23.961434  115497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:23.961523  115497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:23.981459  115497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:23.981488  115497 start.go:475] detecting cgroup driver to use...
	I1206 19:55:23.981550  115497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:24.000294  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:24.013738  115497 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:24.013799  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:24.030844  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:24.044583  115497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:24.161979  115497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:24.296507  115497 docker.go:219] disabling docker service ...
	I1206 19:55:24.296580  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:24.311171  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:24.323538  115497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:24.440425  115497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:24.570168  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:24.583169  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:24.600733  115497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:24.600790  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.610057  115497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:24.610129  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.621925  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.631383  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.640414  115497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:24.649853  115497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:24.657999  115497 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:24.658052  115497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:24.672821  115497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:24.681200  115497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:24.812790  115497 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:24.989383  115497 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:24.989483  115497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:24.995335  115497 start.go:543] Will wait 60s for crictl version
	I1206 19:55:24.995404  115497 ssh_runner.go:195] Run: which crictl
	I1206 19:55:24.999307  115497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:25.038932  115497 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:25.039046  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.085844  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.148264  115497 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:25.149676  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:25.152759  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153157  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:25.153201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153451  115497 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:25.157621  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:25.173609  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:25.173680  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:25.223564  115497 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:25.223647  115497 ssh_runner.go:195] Run: which lz4
	I1206 19:55:25.228720  115497 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:25.234028  115497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:25.234061  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:23.280317  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.280398  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.291959  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.780005  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.780086  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.794371  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:24.257148  115217 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:24.257182  115217 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:24.257196  115217 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:24.257291  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:24.300759  115217 cri.go:89] found id: ""
	I1206 19:55:24.300832  115217 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:24.319509  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:24.329215  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:24.329310  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338150  115217 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338187  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:24.490031  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.123737  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.359750  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.550542  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.697003  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:25.697091  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:25.713836  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.231509  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.730965  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.231602  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.731612  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.763155  115217 api_server.go:72] duration metric: took 2.066152846s to wait for apiserver process to appear ...
	I1206 19:55:27.763181  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:27.763200  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:25.055509  115591 main.go:141] libmachine: (embed-certs-209025) Waiting to get IP...
	I1206 19:55:25.056687  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.057138  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.057192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.057100  116938 retry.go:31] will retry after 304.168381ms: waiting for machine to come up
	I1206 19:55:25.363765  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.364265  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.364404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.364341  116938 retry.go:31] will retry after 351.729741ms: waiting for machine to come up
	I1206 19:55:25.718184  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.718746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.718774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.718650  116938 retry.go:31] will retry after 340.321802ms: waiting for machine to come up
	I1206 19:55:26.060168  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.060796  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.060843  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.060725  116938 retry.go:31] will retry after 422.434651ms: waiting for machine to come up
	I1206 19:55:26.484420  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.484967  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.485003  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.484931  116938 retry.go:31] will retry after 584.854153ms: waiting for machine to come up
	I1206 19:55:27.071766  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.072298  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.072325  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.072233  116938 retry.go:31] will retry after 710.482528ms: waiting for machine to come up
	I1206 19:55:27.784162  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.784660  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.784695  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.784560  116938 retry.go:31] will retry after 754.279817ms: waiting for machine to come up
	I1206 19:55:28.540261  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:28.540788  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:28.540818  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:28.540728  116938 retry.go:31] will retry after 1.359726156s: waiting for machine to come up
	I1206 19:55:27.194696  115497 crio.go:444] Took 1.966010 seconds to copy over tarball
	I1206 19:55:27.194774  115497 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:30.501183  115497 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.306375512s)
	I1206 19:55:30.501222  115497 crio.go:451] Took 3.306493 seconds to extract the tarball
	I1206 19:55:30.501249  115497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:30.542574  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:30.587381  115497 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:30.587405  115497 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:30.587483  115497 ssh_runner.go:195] Run: crio config
	I1206 19:55:30.649117  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:30.649140  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:30.649163  115497 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:30.649191  115497 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.22 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-380424 NodeName:default-k8s-diff-port-380424 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:30.649383  115497 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.22
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-380424"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:30.649487  115497 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-380424 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1206 19:55:30.649561  115497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:30.659186  115497 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:30.659297  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:30.668534  115497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1206 19:55:30.684815  115497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:30.701801  115497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1206 19:55:30.721756  115497 ssh_runner.go:195] Run: grep 192.168.72.22	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:30.726656  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:30.738943  115497 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424 for IP: 192.168.72.22
	I1206 19:55:30.738981  115497 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:30.739159  115497 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:30.739219  115497 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:30.739322  115497 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.key
	I1206 19:55:30.739426  115497 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key.99d663cb
	I1206 19:55:30.739489  115497 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key
	I1206 19:55:30.739629  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:30.739672  115497 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:30.739689  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:30.739726  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:30.739762  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:30.739801  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:30.739872  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:30.740532  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:30.766689  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:30.792892  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:30.817640  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:30.842916  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:30.868057  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:30.893993  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:30.924631  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:30.953503  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:30.980162  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:31.007247  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:31.034274  115497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:31.054544  115497 ssh_runner.go:195] Run: openssl version
	I1206 19:55:31.062053  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:31.077159  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083640  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083707  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.091093  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:31.105305  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:31.117767  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123703  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123798  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.131531  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:31.142449  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:31.157311  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163707  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163783  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.170831  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
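(Note, not part of the log: the /etc/ssl/certs entries named with eight hex digits plus ".0" above — b5213941.0, 51391683.0, 3ec20f2e.0 — are OpenSSL subject-hash links, which is how TLS clients locate a CA in the system store. A rough Go sketch of the same install step, shelling out to the openssl binary exactly as the log does; paths and function names here are illustrative, not minikube code.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks a CA certificate into /etc/ssl/certs under its
// OpenSSL subject hash, mirroring the `openssl x509 -hash` + `ln -fs`
// steps in the log above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}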
	I1206 19:55:31.183300  115497 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:31.188165  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:31.194562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:31.201769  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:31.209562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:31.217346  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:31.225522  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
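(Note, not part of the log: each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now, and minikube regenerates any cert that fails the check. An equivalent check written directly against Go's x509 package would look roughly like this; the file path is just a placeholder.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d (the same question `openssl x509 -checkend` answers).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}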
	I1206 19:55:31.233755  115497 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-380424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:31.233889  115497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:31.233952  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:31.278891  115497 cri.go:89] found id: ""
	I1206 19:55:31.278972  115497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:31.291971  115497 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:31.291999  115497 kubeadm.go:636] restartCluster start
	I1206 19:55:31.292070  115497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:31.304934  115497 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.306156  115497 kubeconfig.go:92] found "default-k8s-diff-port-380424" server: "https://192.168.72.22:8444"
	I1206 19:55:31.308710  115497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:31.321910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.321976  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.339075  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.339096  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.339143  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.354436  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.765826  115217 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 19:55:32.765895  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:29.902670  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:29.903123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:29.903152  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:29.903081  116938 retry.go:31] will retry after 1.188380941s: waiting for machine to come up
	I1206 19:55:31.092707  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:31.093278  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:31.093311  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:31.093245  116938 retry.go:31] will retry after 1.854046475s: waiting for machine to come up
	I1206 19:55:32.948423  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:32.948866  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:32.948891  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:32.948827  116938 retry.go:31] will retry after 2.868825903s: waiting for machine to come up
	I1206 19:55:34.066100  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:34.066146  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:34.566904  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:34.573643  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:34.573675  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.066235  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.076927  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:35.076966  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.566361  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.574853  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 19:55:35.585855  115217 api_server.go:141] control plane version: v1.16.0
	I1206 19:55:35.585895  115217 api_server.go:131] duration metric: took 7.822706447s to wait for apiserver health ...
	I1206 19:55:35.585908  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:35.585917  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:35.587984  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
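(Note, not part of the log: the healthz progression above — 403 while RBAC bootstrap roles are still being created, then 500 with individual poststarthook failures, then 200 — is the normal warm-up sequence for a restarted apiserver. minikube simply polls the endpoint until it answers "ok". A stripped-down version of that poll is sketched below; TLS verification is skipped only because the sketch does not load the cluster CA.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it returns 200
// or the timeout expires. A real client would trust the cluster CA
// instead of disabling verification.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.33:8443/healthz", 4*time.Minute))
}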
	I1206 19:55:31.855148  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.855275  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.867628  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.355238  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.355330  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.368154  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.854710  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.854820  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.870926  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.355493  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.355586  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.371984  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.854511  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.854604  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.871260  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.354793  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.354897  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.371333  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.855487  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.855575  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.868348  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.354949  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.355026  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.367357  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.854910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.855003  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.871382  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:36.354908  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.355047  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.371112  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.589529  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:35.599454  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:35.616803  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:35.626793  115217 system_pods.go:59] 7 kube-system pods found
	I1206 19:55:35.626829  115217 system_pods.go:61] "coredns-5644d7b6d9-nrtk9" [447f7434-3f97-4e3f-9451-d9a54bff7ba1] Running
	I1206 19:55:35.626837  115217 system_pods.go:61] "etcd-old-k8s-version-448851" [77c1f822-788f-4f28-8f8e-54278d5d9e10] Running
	I1206 19:55:35.626843  115217 system_pods.go:61] "kube-apiserver-old-k8s-version-448851" [d3cf3d55-8862-4f81-ac61-99b202469859] Running
	I1206 19:55:35.626851  115217 system_pods.go:61] "kube-controller-manager-old-k8s-version-448851" [58ffb9bc-b5a3-4c64-a78f-da0011e6c277] Running
	I1206 19:55:35.626869  115217 system_pods.go:61] "kube-proxy-sw4qv" [6c08ab4a-447b-42e9-a617-ac35d66cf4ea] Running
	I1206 19:55:35.626879  115217 system_pods.go:61] "kube-scheduler-old-k8s-version-448851" [378ead75-3fd6-4cfd-a063-f2afc3a1cd12] Running
	I1206 19:55:35.626886  115217 system_pods.go:61] "storage-provisioner" [cce901c3-37d9-4ae2-ab9c-99bb7fda6d23] Running
	I1206 19:55:35.626901  115217 system_pods.go:74] duration metric: took 10.069819ms to wait for pod list to return data ...
	I1206 19:55:35.626910  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:35.632164  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:35.632240  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:35.632256  115217 node_conditions.go:105] duration metric: took 5.340532ms to run NodePressure ...
	I1206 19:55:35.632282  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:35.925990  115217 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:35.935849  115217 retry.go:31] will retry after 256.122518ms: kubelet not initialised
	I1206 19:55:36.197872  115217 retry.go:31] will retry after 337.717759ms: kubelet not initialised
	I1206 19:55:36.541368  115217 retry.go:31] will retry after 784.037462ms: kubelet not initialised
	I1206 19:55:37.331284  115217 retry.go:31] will retry after 921.381118ms: kubelet not initialised
	I1206 19:55:35.819131  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:35.819759  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:35.819793  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:35.819698  116938 retry.go:31] will retry after 2.281000862s: waiting for machine to come up
	I1206 19:55:38.103281  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:38.103807  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:38.103845  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:38.103736  116938 retry.go:31] will retry after 3.076134377s: waiting for machine to come up
	I1206 19:55:36.855191  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.855309  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.872110  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.354562  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.354682  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.370156  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.854600  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.854726  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.870621  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.355289  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.355391  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.368595  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.855116  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.855218  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.868455  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.354955  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.355048  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.368875  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.854833  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.854928  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.866765  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.354989  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.355106  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.367526  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.854791  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.854873  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.866579  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:41.322422  115497 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:41.322456  115497 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:41.322472  115497 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:41.322548  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:41.360234  115497 cri.go:89] found id: ""
	I1206 19:55:41.360301  115497 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:41.376968  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:41.387639  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:41.387694  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397586  115497 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397617  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:38.258758  115217 retry.go:31] will retry after 961.817778ms: kubelet not initialised
	I1206 19:55:39.225505  115217 retry.go:31] will retry after 1.751905914s: kubelet not initialised
	I1206 19:55:40.982344  115217 retry.go:31] will retry after 1.649102014s: kubelet not initialised
	I1206 19:55:42.639410  115217 retry.go:31] will retry after 3.317462401s: kubelet not initialised
	I1206 19:55:41.182443  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:41.182893  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:41.182930  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:41.182837  116938 retry.go:31] will retry after 4.029797575s: waiting for machine to come up
	I1206 19:55:41.519134  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.404075  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.613308  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.707533  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.796041  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:42.796139  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:42.816782  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.336582  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.836183  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.336879  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.836718  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.336249  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.363947  115497 api_server.go:72] duration metric: took 2.567911355s to wait for apiserver process to appear ...
	I1206 19:55:45.363968  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:45.363984  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
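(Note, not part of the log: before hitting healthz, minikube first waits for the kube-apiserver process itself to exist, which is what the long runs of `sudo pgrep -xnf kube-apiserver.*minikube.*` earlier in this log are doing — exit status 1 just means no match yet. Roughly, in Go, polling for a process over the same pgrep invocation; illustrative only.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it reports a PID or the
// timeout expires; pgrep exits 1 when nothing matches, which we treat as
// "not up yet" rather than a hard failure.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q appeared", pattern)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
	fmt.Println(pid, err)
}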
	I1206 19:55:46.486502  115078 start.go:369] acquired machines lock for "no-preload-989559" in 57.98684139s
	I1206 19:55:46.486560  115078 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:46.486570  115078 fix.go:54] fixHost starting: 
	I1206 19:55:46.487006  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:46.487052  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:46.506170  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1206 19:55:46.506576  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:46.507081  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:55:46.507110  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:46.507412  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:46.507600  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:55:46.508110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:55:46.509817  115078 fix.go:102] recreateIfNeeded on no-preload-989559: state=Stopped err=<nil>
	I1206 19:55:46.509843  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	W1206 19:55:46.509988  115078 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:46.512103  115078 out.go:177] * Restarting existing kvm2 VM for "no-preload-989559" ...
	I1206 19:55:45.214823  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215271  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has current primary IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215293  115591 main.go:141] libmachine: (embed-certs-209025) Found IP for machine: 192.168.50.164
	I1206 19:55:45.215341  115591 main.go:141] libmachine: (embed-certs-209025) Reserving static IP address...
	I1206 19:55:45.215738  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.215772  115591 main.go:141] libmachine: (embed-certs-209025) DBG | skip adding static IP to network mk-embed-certs-209025 - found existing host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"}
	I1206 19:55:45.215787  115591 main.go:141] libmachine: (embed-certs-209025) Reserved static IP address: 192.168.50.164
	I1206 19:55:45.215805  115591 main.go:141] libmachine: (embed-certs-209025) Waiting for SSH to be available...
	I1206 19:55:45.215821  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Getting to WaitForSSH function...
	I1206 19:55:45.217850  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.218219  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218370  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH client type: external
	I1206 19:55:45.218404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa (-rw-------)
	I1206 19:55:45.218438  115591 main.go:141] libmachine: (embed-certs-209025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:45.218452  115591 main.go:141] libmachine: (embed-certs-209025) DBG | About to run SSH command:
	I1206 19:55:45.218475  115591 main.go:141] libmachine: (embed-certs-209025) DBG | exit 0
	I1206 19:55:45.309353  115591 main.go:141] libmachine: (embed-certs-209025) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:45.309758  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetConfigRaw
	I1206 19:55:45.310547  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.313019  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.313369  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313638  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:55:45.313844  115591 machine.go:88] provisioning docker machine ...
	I1206 19:55:45.313870  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:45.314081  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314264  115591 buildroot.go:166] provisioning hostname "embed-certs-209025"
	I1206 19:55:45.314298  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314509  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.316952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317361  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.317395  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.317821  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.317954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.318079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.318235  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.318665  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.318683  115591 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-209025 && echo "embed-certs-209025" | sudo tee /etc/hostname
	I1206 19:55:45.459071  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-209025
	
	I1206 19:55:45.459107  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.461953  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.462362  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462592  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.462814  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463010  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.463353  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.463887  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.463916  115591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-209025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-209025/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-209025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:45.597186  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:45.597220  115591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:45.597253  115591 buildroot.go:174] setting up certificates
	I1206 19:55:45.597270  115591 provision.go:83] configureAuth start
	I1206 19:55:45.597288  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.597658  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.600582  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.600954  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.600983  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.601138  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.603355  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.603774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603942  115591 provision.go:138] copyHostCerts
	I1206 19:55:45.604012  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:45.604037  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:45.604113  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:45.604227  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:45.604243  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:45.604277  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:45.604353  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:45.604363  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:45.604390  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:45.604454  115591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-209025 san=[192.168.50.164 192.168.50.164 localhost 127.0.0.1 minikube embed-certs-209025]
	I1206 19:55:45.706944  115591 provision.go:172] copyRemoteCerts
	I1206 19:55:45.707028  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:45.707069  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.709985  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710357  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.710398  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710530  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.710738  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.710917  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.711092  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:45.807035  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:45.831480  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:45.855902  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 19:55:45.882797  115591 provision.go:86] duration metric: configureAuth took 285.508678ms
	I1206 19:55:45.882831  115591 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:45.883074  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:45.883156  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.886130  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886576  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.886611  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886825  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.887026  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887198  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887348  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.887570  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.887900  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.887926  115591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:46.217654  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:46.217732  115591 machine.go:91] provisioned docker machine in 903.869734ms
	I1206 19:55:46.217748  115591 start.go:300] post-start starting for "embed-certs-209025" (driver="kvm2")
	I1206 19:55:46.217762  115591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:46.217788  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.218154  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:46.218190  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.220968  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221345  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.221378  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221557  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.221781  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.221951  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.222093  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.316289  115591 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:46.321014  115591 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:46.321046  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:46.321108  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:46.321183  115591 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:46.321304  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:46.331967  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:46.358983  115591 start.go:303] post-start completed in 141.214825ms
	I1206 19:55:46.359014  115591 fix.go:56] fixHost completed within 22.668688221s
	I1206 19:55:46.359037  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.361846  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362179  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.362212  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362452  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.362704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.362897  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.363073  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.363310  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:46.363803  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:46.363823  115591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 19:55:46.486321  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892546.422221924
	
	I1206 19:55:46.486350  115591 fix.go:206] guest clock: 1701892546.422221924
	I1206 19:55:46.486361  115591 fix.go:219] Guest: 2023-12-06 19:55:46.422221924 +0000 UTC Remote: 2023-12-06 19:55:46.359018 +0000 UTC m=+296.897065855 (delta=63.203924ms)
	I1206 19:55:46.486385  115591 fix.go:190] guest clock delta is within tolerance: 63.203924ms
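(Editor's note) The clock check above parses the output of `date +%s.%N` from the guest and compares it with the host-side timestamp; the 63.203924ms difference is accepted as within tolerance. A small sketch reproducing that arithmetic with the two timestamps from the log (the 1s threshold is hypothetical; the real tolerance is not shown here):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1701892546, 422221924)                          // from `date +%s.%N` on the VM
	remote := time.Date(2023, 12, 6, 19, 55, 46, 359018000, time.UTC) // host-side reference time
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // hypothetical threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance) // prints delta=63.203924ms
}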
	I1206 19:55:46.486391  115591 start.go:83] releasing machines lock for "embed-certs-209025", held for 22.796102432s
	I1206 19:55:46.486420  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.486727  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:46.489589  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.489890  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.489922  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.490079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490643  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490836  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490924  115591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:46.490974  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.491257  115591 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:46.491281  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.494034  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494326  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494379  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494405  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.494748  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494900  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.494958  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.495019  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495144  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.495137  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.495269  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495397  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.587575  115591 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:46.614901  115591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:46.764133  115591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:46.771049  115591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:46.771133  115591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:46.786157  115591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:46.786187  115591 start.go:475] detecting cgroup driver to use...
	I1206 19:55:46.786262  115591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:46.801158  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:46.812881  115591 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:46.812948  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:46.825139  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:46.838071  115591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:46.949823  115591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:47.080490  115591 docker.go:219] disabling docker service ...
	I1206 19:55:47.080572  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:47.094773  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:47.107963  115591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:47.233536  115591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:47.360425  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:47.377453  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:47.395959  115591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:47.396026  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.406599  115591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:47.406696  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.417082  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.427463  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
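(Editor's note) The three sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, then drop and re-add conmon_cgroup = "pod" right after it. A rough Go equivalent of those edits, applied to a hypothetical starting config so the before/after is visible:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting contents; the real file lives on the VM.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "") // like the sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"") // like the sed '/a conmon_cgroup = "pod"'
	fmt.Print(conf)
}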
	I1206 19:55:47.438246  115591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:47.449910  115591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:47.459620  115591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:47.459675  115591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:47.476230  115591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:47.486777  115591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:47.597395  115591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:47.809260  115591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:47.809348  115591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:47.815968  115591 start.go:543] Will wait 60s for crictl version
	I1206 19:55:47.816035  115591 ssh_runner.go:195] Run: which crictl
	I1206 19:55:47.820214  115591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:47.869345  115591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:47.869435  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.923602  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.983187  115591 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:45.963265  115217 retry.go:31] will retry after 4.496095904s: kubelet not initialised
	I1206 19:55:47.984954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:47.988218  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.988742  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:47.988775  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.989031  115591 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:47.994471  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:48.008964  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:48.009022  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:48.056234  115591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:48.056333  115591 ssh_runner.go:195] Run: which lz4
	I1206 19:55:48.061573  115591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 19:55:48.066119  115591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:48.066156  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:46.513897  115078 main.go:141] libmachine: (no-preload-989559) Calling .Start
	I1206 19:55:46.514072  115078 main.go:141] libmachine: (no-preload-989559) Ensuring networks are active...
	I1206 19:55:46.514830  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network default is active
	I1206 19:55:46.515153  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network mk-no-preload-989559 is active
	I1206 19:55:46.515533  115078 main.go:141] libmachine: (no-preload-989559) Getting domain xml...
	I1206 19:55:46.516251  115078 main.go:141] libmachine: (no-preload-989559) Creating domain...
	I1206 19:55:47.899847  115078 main.go:141] libmachine: (no-preload-989559) Waiting to get IP...
	I1206 19:55:47.900939  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:47.901513  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:47.901634  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:47.901487  117094 retry.go:31] will retry after 244.343929ms: waiting for machine to come up
	I1206 19:55:48.148254  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.148888  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.148927  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.148835  117094 retry.go:31] will retry after 258.755356ms: waiting for machine to come up
	I1206 19:55:48.409550  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.410401  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.410427  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.410308  117094 retry.go:31] will retry after 321.790541ms: waiting for machine to come up
	I1206 19:55:48.734055  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.734744  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.734768  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.734646  117094 retry.go:31] will retry after 464.789653ms: waiting for machine to come up
	I1206 19:55:49.201462  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.202032  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.202065  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.201985  117094 retry.go:31] will retry after 541.238407ms: waiting for machine to come up
	I1206 19:55:49.744792  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.745432  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.745461  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.745338  117094 retry.go:31] will retry after 791.407194ms: waiting for machine to come up
	I1206 19:55:50.538151  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:50.538857  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:50.538883  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:50.538741  117094 retry.go:31] will retry after 1.11510814s: waiting for machine to come up
	I1206 19:55:49.730248  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.730287  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:49.730318  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:49.788747  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.788796  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:50.289144  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.301437  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.301479  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:50.789018  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.800325  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.800374  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:51.289899  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:51.297638  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 19:55:51.310738  115497 api_server.go:141] control plane version: v1.28.4
	I1206 19:55:51.310772  115497 api_server.go:131] duration metric: took 5.946796569s to wait for apiserver health ...
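(Editor's note) The 403 -> 500 -> 200 sequence above is the apiserver coming up: anonymous requests are rejected until the RBAC bootstrap roles exist, /healthz then reports 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally returns 200 "ok". A minimal polling sketch against the same endpoint; TLS verification is skipped only because this sketch has no CA bundle, and the 500ms interval approximates the cadence seen in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.72.22:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}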
	I1206 19:55:51.310784  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:51.310793  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:51.312719  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:51.314431  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:51.335045  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:51.365598  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:51.381865  115497 system_pods.go:59] 8 kube-system pods found
	I1206 19:55:51.381914  115497 system_pods.go:61] "coredns-5dd5756b68-4rgxf" [2ae6daa5-430f-4f14-a40c-c29f4757fb06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:55:51.381936  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [895b0cdf-86c9-4b14-a633-4b172471cd2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:55:51.381947  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [ccc042d4-cd4c-4769-adc6-99d792146d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:55:51.381963  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [b3fbba6f-fa71-489e-81b0-0196bb019273] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:55:51.381972  115497 system_pods.go:61] "kube-proxy-9ftnp" [4389fff8-1b22-47a5-af97-35a4e5b6c2b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:55:51.381981  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [b53c666c-cc84-4dd3-b208-35d04113381c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:55:51.381997  115497 system_pods.go:61] "metrics-server-57f55c9bc5-7bblg" [3a6477d9-cb91-48cb-ba03-8b669db53841] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:55:51.382006  115497 system_pods.go:61] "storage-provisioner" [b8f06027-e37c-4c09-b361-4d70af65c991] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:55:51.382020  115497 system_pods.go:74] duration metric: took 16.393796ms to wait for pod list to return data ...
	I1206 19:55:51.382041  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:51.389181  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:51.389242  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:51.389256  115497 node_conditions.go:105] duration metric: took 7.208817ms to run NodePressure ...
	I1206 19:55:51.389285  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:50.466490  115217 retry.go:31] will retry after 11.434043258s: kubelet not initialised
	I1206 19:55:49.900059  115591 crio.go:444] Took 1.838540 seconds to copy over tarball
	I1206 19:55:49.900171  115591 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:53.471724  115591 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.571512743s)
	I1206 19:55:53.471757  115591 crio.go:451] Took 3.571659 seconds to extract the tarball
	I1206 19:55:53.471770  115591 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:53.522151  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:53.578068  115591 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:53.578167  115591 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:53.578285  115591 ssh_runner.go:195] Run: crio config
	I1206 19:55:53.650688  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:55:53.650715  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:53.650736  115591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:53.650762  115591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-209025 NodeName:embed-certs-209025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:53.650938  115591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-209025"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:53.651025  115591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-209025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:53.651093  115591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:53.663792  115591 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:53.663869  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:53.674126  115591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 19:55:53.692175  115591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:53.708842  115591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1206 19:55:53.726141  115591 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:53.730310  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
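(Editor's note) The bash one-liner above is the standard minikube hosts-file idiom: grep -v drops any existing control-plane.minikube.internal line, echo appends the current mapping, and the temp file is copied back over /etc/hosts with sudo. The same idea expressed directly in Go (local file path is illustrative; the real run executes the bash pipeline over SSH inside the VM):

package main

import (
	"os"
	"strings"
)

// upsertHost rewrites a hosts-style file so that exactly one line maps name to ip.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like `grep -v $'\t<name>$'`
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical local copy of the file; minikube edits /etc/hosts in the guest.
	_ = upsertHost("hosts.sample", "192.168.50.164", "control-plane.minikube.internal")
}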
	I1206 19:55:53.742456  115591 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025 for IP: 192.168.50.164
	I1206 19:55:53.742489  115591 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:53.742712  115591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:53.742765  115591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:53.742841  115591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/client.key
	I1206 19:55:53.742898  115591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key.d84b90a2
	I1206 19:55:53.742941  115591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key
	I1206 19:55:53.743053  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:53.743081  115591 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:53.743096  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:53.743135  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:53.743172  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:53.743205  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:53.743265  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:53.743932  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:53.770792  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:53.795080  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:53.820920  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 19:55:53.849068  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:53.875210  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:53.900201  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:53.927067  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:53.952810  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:53.979374  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:54.005013  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:54.028072  115591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:54.047087  115591 ssh_runner.go:195] Run: openssl version
	I1206 19:55:54.052949  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:54.064662  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069695  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069767  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.076520  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:54.088312  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:54.100303  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105718  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105787  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.111543  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:54.124094  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:54.137418  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142536  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142611  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.148497  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
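(Editor's note) Each `openssl x509 -hash -noout` call above prints the certificate's OpenSSL subject hash, and the following ln -fs creates a symlink named <hash>.0 in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem); that naming is how OpenSSL-based clients look up trusted CAs. A sketch of the same two steps, shelling out to openssl as the log does (paths are illustrative and the real run does this via sudo over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert symlinks certPath into dir under its OpenSSL subject hash (e.g. b5213941.0).
func linkCert(dir, certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", dir, hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}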
	I1206 19:55:54.160909  115591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:54.165739  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:54.171884  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:54.179765  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:54.187615  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:54.195156  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:54.203228  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
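(Editor's note) The `openssl x509 -checkend 86400` calls above exit non-zero when a certificate expires within the next 24 hours, which is how this restart decides the existing control-plane certificates can be reused rather than regenerated. The same check in pure Go with crypto/x509 (the path below is one of the certs from the log, used purely as an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 86400s = 24h, matching the -checkend argument in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}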
	I1206 19:55:54.210119  115591 kubeadm.go:404] StartCluster: {Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:54.210251  115591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:54.210308  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:54.258252  115591 cri.go:89] found id: ""
	I1206 19:55:54.258347  115591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:54.270699  115591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:54.270724  115591 kubeadm.go:636] restartCluster start
	I1206 19:55:54.270785  115591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:54.281833  115591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.282964  115591 kubeconfig.go:92] found "embed-certs-209025" server: "https://192.168.50.164:8443"
	I1206 19:55:54.285394  115591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:54.296437  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.296545  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.309685  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.309707  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.309774  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.322265  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:51.655238  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:51.655732  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:51.655776  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:51.655642  117094 retry.go:31] will retry after 958.384892ms: waiting for machine to come up
	I1206 19:55:52.616005  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:52.616540  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:52.616583  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:52.616471  117094 retry.go:31] will retry after 1.537571193s: waiting for machine to come up
	I1206 19:55:54.155949  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:54.156397  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:54.156429  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:54.156344  117094 retry.go:31] will retry after 2.030397746s: waiting for machine to come up
	I1206 19:55:51.771991  115497 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:51.786960  115497 kubeadm.go:787] kubelet initialised
	I1206 19:55:51.787056  115497 kubeadm.go:788] duration metric: took 14.962005ms waiting for restarted kubelet to initialise ...
	I1206 19:55:51.787080  115497 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:55:51.799090  115497 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:53.845695  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:55.850483  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:54.823014  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.823105  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.835793  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.323393  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.323491  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.337041  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.823330  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.823437  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.839489  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.323250  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.323356  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.340029  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.822585  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.822700  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.835752  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.323326  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.323413  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.339916  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.823386  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.823478  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.840112  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.322441  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.322557  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.335485  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.822575  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.822695  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.839495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:59.323053  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.323129  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.336117  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.188549  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:56.189073  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:56.189105  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:56.189026  117094 retry.go:31] will retry after 2.455387871s: waiting for machine to come up
	I1206 19:55:58.646361  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:58.646772  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:58.646804  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:58.646710  117094 retry.go:31] will retry after 3.286246406s: waiting for machine to come up
	I1206 19:55:57.344443  115497 pod_ready.go:92] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"True"
	I1206 19:55:57.344478  115497 pod_ready.go:81] duration metric: took 5.545343389s waiting for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:57.344492  115497 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:59.363596  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.363703  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.907869  115217 retry.go:31] will retry after 21.572905296s: kubelet not initialised
	I1206 19:55:59.823000  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.823148  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.836153  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.322534  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.322617  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.340369  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.822851  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.822947  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.836512  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.323083  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.323161  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.337092  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.822623  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.822761  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.836428  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.323125  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.323213  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.336617  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.823198  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.823287  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.835923  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.322426  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.322527  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.336495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.822711  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.822803  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.836624  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:04.297216  115591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:04.297278  115591 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:04.297295  115591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:04.297393  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:04.343930  115591 cri.go:89] found id: ""
	I1206 19:56:04.344015  115591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:04.364785  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:04.376251  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:04.376320  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387749  115591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387779  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:04.511034  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:01.934204  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:01.934775  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:01.934798  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:01.934724  117094 retry.go:31] will retry after 2.967009815s: waiting for machine to come up
	I1206 19:56:04.903296  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:04.903725  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:04.903747  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:04.903692  117094 retry.go:31] will retry after 4.817836653s: waiting for machine to come up
	I1206 19:56:03.862804  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:04.373174  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.373209  115497 pod_ready.go:81] duration metric: took 7.028708302s waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.373222  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383300  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.383324  115497 pod_ready.go:81] duration metric: took 10.094356ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383333  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390225  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.390254  115497 pod_ready.go:81] duration metric: took 6.909695ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390267  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396713  115497 pod_ready.go:92] pod "kube-proxy-9ftnp" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.396753  115497 pod_ready.go:81] duration metric: took 6.477432ms waiting for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396766  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407015  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.407042  115497 pod_ready.go:81] duration metric: took 10.266604ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407056  115497 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:05.819075  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.307992443s)
	I1206 19:56:05.819111  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.024824  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.120865  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.207869  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:06.207959  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.221306  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.734164  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.234302  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.734130  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.233726  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.734073  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.762848  115591 api_server.go:72] duration metric: took 2.554978073s to wait for apiserver process to appear ...
	I1206 19:56:08.762881  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:08.762903  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:09.723600  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724078  115078 main.go:141] libmachine: (no-preload-989559) Found IP for machine: 192.168.39.5
	I1206 19:56:09.724107  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has current primary IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724114  115078 main.go:141] libmachine: (no-preload-989559) Reserving static IP address...
	I1206 19:56:09.724466  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.724509  115078 main.go:141] libmachine: (no-preload-989559) DBG | skip adding static IP to network mk-no-preload-989559 - found existing host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"}
	I1206 19:56:09.724526  115078 main.go:141] libmachine: (no-preload-989559) Reserved static IP address: 192.168.39.5
	I1206 19:56:09.724536  115078 main.go:141] libmachine: (no-preload-989559) Waiting for SSH to be available...
	I1206 19:56:09.724546  115078 main.go:141] libmachine: (no-preload-989559) DBG | Getting to WaitForSSH function...
	I1206 19:56:09.726831  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727117  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.727149  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727248  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH client type: external
	I1206 19:56:09.727277  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa (-rw-------)
	I1206 19:56:09.727306  115078 main.go:141] libmachine: (no-preload-989559) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:56:09.727317  115078 main.go:141] libmachine: (no-preload-989559) DBG | About to run SSH command:
	I1206 19:56:09.727361  115078 main.go:141] libmachine: (no-preload-989559) DBG | exit 0
	I1206 19:56:09.866040  115078 main.go:141] libmachine: (no-preload-989559) DBG | SSH cmd err, output: <nil>: 
	I1206 19:56:09.866443  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetConfigRaw
	I1206 19:56:09.867193  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:09.869892  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870335  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.870374  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870612  115078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/config.json ...
	I1206 19:56:09.870870  115078 machine.go:88] provisioning docker machine ...
	I1206 19:56:09.870895  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:09.871120  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871299  115078 buildroot.go:166] provisioning hostname "no-preload-989559"
	I1206 19:56:09.871320  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871471  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:09.874146  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874514  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.874554  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874741  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:09.874943  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875114  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875258  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:09.875412  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:09.875921  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:09.875942  115078 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-989559 && echo "no-preload-989559" | sudo tee /etc/hostname
	I1206 19:56:10.017205  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-989559
	
	I1206 19:56:10.017259  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.020397  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.020843  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.020873  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.021040  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.021287  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021450  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021578  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.021773  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.022227  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.022255  115078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-989559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-989559/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-989559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:56:10.160934  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:56:10.161020  115078 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:56:10.161056  115078 buildroot.go:174] setting up certificates
	I1206 19:56:10.161072  115078 provision.go:83] configureAuth start
	I1206 19:56:10.161086  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:10.161464  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:10.164558  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.164956  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.165007  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.165246  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.167911  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168352  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.168412  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168529  115078 provision.go:138] copyHostCerts
	I1206 19:56:10.168589  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:56:10.168612  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:56:10.168675  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:56:10.168796  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:56:10.168811  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:56:10.168844  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:56:10.168923  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:56:10.168962  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:56:10.168990  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:56:10.169062  115078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.no-preload-989559 san=[192.168.39.5 192.168.39.5 localhost 127.0.0.1 minikube no-preload-989559]
	I1206 19:56:10.266595  115078 provision.go:172] copyRemoteCerts
	I1206 19:56:10.266665  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:56:10.266693  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.269388  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269786  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.269813  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269987  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.270226  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.270390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.270536  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.362922  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:56:10.388165  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 19:56:10.412473  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:56:10.436804  115078 provision.go:86] duration metric: configureAuth took 275.714086ms
	I1206 19:56:10.436840  115078 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:56:10.437076  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 19:56:10.437156  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.439999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440419  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.440461  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440567  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.440813  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441006  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441213  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.441393  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.441827  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.441844  115078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:56:10.766695  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:56:10.766726  115078 machine.go:91] provisioned docker machine in 895.840237ms
	I1206 19:56:10.766739  115078 start.go:300] post-start starting for "no-preload-989559" (driver="kvm2")
	I1206 19:56:10.766759  115078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:56:10.766780  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:10.767134  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:56:10.767175  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.770309  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770704  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.770733  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770881  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.771110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.771247  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.771414  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.869486  115078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:56:10.874406  115078 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:56:10.874433  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:56:10.874502  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:56:10.874584  115078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:56:10.874684  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:56:10.885837  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:10.910379  115078 start.go:303] post-start completed in 143.622076ms
	I1206 19:56:10.910406  115078 fix.go:56] fixHost completed within 24.423837205s
	I1206 19:56:10.910430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.913414  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.913887  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.913924  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.914062  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.914276  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914575  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.914741  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.915078  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.915096  115078 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:56:06.672320  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:09.170277  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.173418  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.046393  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892571.030057611
	
	I1206 19:56:11.046418  115078 fix.go:206] guest clock: 1701892571.030057611
	I1206 19:56:11.046427  115078 fix.go:219] Guest: 2023-12-06 19:56:11.030057611 +0000 UTC Remote: 2023-12-06 19:56:10.910410702 +0000 UTC m=+364.968252500 (delta=119.646909ms)
	I1206 19:56:11.046452  115078 fix.go:190] guest clock delta is within tolerance: 119.646909ms
	I1206 19:56:11.046460  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 24.559924375s
	I1206 19:56:11.046485  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.046751  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:11.049522  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.049918  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.049958  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.050160  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050715  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050932  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.051010  115078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:56:11.051063  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.051201  115078 ssh_runner.go:195] Run: cat /version.json
	I1206 19:56:11.051234  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.054142  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054342  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054556  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054587  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054711  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.054925  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054930  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.054950  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.055054  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.055147  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055316  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.055338  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.055483  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055605  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.180256  115078 ssh_runner.go:195] Run: systemctl --version
	I1206 19:56:11.186702  115078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:56:11.339386  115078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:56:11.346262  115078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:56:11.346364  115078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:56:11.362865  115078 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:56:11.362902  115078 start.go:475] detecting cgroup driver to use...
	I1206 19:56:11.362988  115078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:56:11.383636  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:56:11.397157  115078 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:56:11.397264  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:56:11.411338  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:56:11.425762  115078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:56:11.560730  115078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:56:11.708633  115078 docker.go:219] disabling docker service ...
	I1206 19:56:11.708713  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:56:11.723172  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:56:11.737032  115078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:56:11.851037  115078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:56:11.969321  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:56:11.982745  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:56:12.003130  115078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:56:12.003215  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.013345  115078 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:56:12.013428  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.023765  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.034114  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.044159  115078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:56:12.054135  115078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:56:12.062781  115078 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:56:12.062861  115078 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:56:12.076322  115078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:56:12.085924  115078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:56:12.216360  115078 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:56:12.409482  115078 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:56:12.409550  115078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:56:12.417063  115078 start.go:543] Will wait 60s for crictl version
	I1206 19:56:12.417135  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:12.422177  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:56:12.474340  115078 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:56:12.474449  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.538091  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.604444  115078 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 19:56:12.144887  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.144921  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.144936  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.179318  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.179366  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.679803  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.694412  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:12.694449  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.179503  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.193118  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:13.193161  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.679759  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.685603  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 19:56:13.694792  115591 api_server.go:141] control plane version: v1.28.4
	I1206 19:56:13.694831  115591 api_server.go:131] duration metric: took 4.931941572s to wait for apiserver health ...
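	For context, the health wait above is just repeated GETs against the apiserver's /healthz endpoint until it stops returning 500. A minimal Go sketch of that kind of polling loop (endpoint, interval and timeout are assumptions, not minikube's exact values):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Loosely mirrors the api_server.go checks logged above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is not trusted by the host in this sketch, so skip
		// verification; only the health status is being observed here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			// A 500 body carries the per-check [+]/[-] breakdown seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.164:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

	Individual checks can usually be queried directly as well, e.g. /healthz/poststarthook/rbac/bootstrap-roles, which is handy when only one hook is reported as failed.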
	I1206 19:56:13.694843  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:56:13.694852  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:13.697042  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:13.698653  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:13.712991  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:13.734001  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:13.761962  115591 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:13.762001  115591 system_pods.go:61] "coredns-5dd5756b68-cpst4" [e7d8324e-8468-470c-b532-1f09ee805bab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:13.762022  115591 system_pods.go:61] "etcd-embed-certs-209025" [eeb81149-8e43-4efe-b977-e8f84c7a7c57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:13.762032  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b64e228d-4921-4e35-b80c-343f8519076e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:13.762041  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [2206d849-0724-42c9-b5c4-4d84c3cafce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:13.762053  115591 system_pods.go:61] "kube-proxy-pt8nj" [b7cffe6a-4401-40e0-8056-68452e15b57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:13.762068  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [88ae7a94-a1bc-463a-9253-5f308ec1755e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:13.762077  115591 system_pods.go:61] "metrics-server-57f55c9bc5-dr9k8" [0dbe18a4-d30d-4882-b188-b0d1f1b1d711] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:13.762092  115591 system_pods.go:61] "storage-provisioner" [afebf144-9062-4b43-a491-9eecd5ab6c10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:13.762109  115591 system_pods.go:74] duration metric: took 28.078588ms to wait for pod list to return data ...
	I1206 19:56:13.762120  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:13.773614  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:13.773646  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:13.773657  115591 node_conditions.go:105] duration metric: took 11.528993ms to run NodePressure ...
	I1206 19:56:13.773678  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:14.157761  115591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169588  115591 kubeadm.go:787] kubelet initialised
	I1206 19:56:14.169632  115591 kubeadm.go:788] duration metric: took 11.756226ms waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169644  115591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:14.186031  115591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.211717  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211747  115591 pod_ready.go:81] duration metric: took 25.681607ms waiting for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.211759  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211769  115591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.219369  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219396  115591 pod_ready.go:81] duration metric: took 7.594898ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.219408  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219425  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.233417  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233513  115591 pod_ready.go:81] duration metric: took 14.073312ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.233535  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233546  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.244480  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244516  115591 pod_ready.go:81] duration metric: took 10.958431ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.244530  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244537  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
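	The pod_ready waits above boil down to polling each pod's Ready condition until it is True or the 4m0s budget runs out. A minimal client-go sketch of that check (kubeconfig path, namespace and pod name are placeholders, not values from this run):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-pt8nj", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```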
	I1206 19:56:12.606102  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:12.609040  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609395  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:12.609436  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609665  115078 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:56:12.615279  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:12.629571  115078 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 19:56:12.629641  115078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:56:12.674728  115078 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 19:56:12.674763  115078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:56:12.674870  115078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.674886  115078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.674910  115078 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.674923  115078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.674965  115078 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1206 19:56:12.674885  115078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.674998  115078 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.674889  115078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676510  115078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.676539  115078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676563  115078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.676576  115078 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1206 19:56:12.676511  115078 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.676599  115078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.676624  115078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.676642  115078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.862606  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1206 19:56:12.882993  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.884387  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.900149  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.909389  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.916391  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.924669  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.946885  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.028628  115078 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1206 19:56:13.028685  115078 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.028741  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.095076  115078 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1206 19:56:13.095139  115078 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.095289  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.136956  115078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1206 19:56:13.137003  115078 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 19:56:13.137074  115078 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.137130  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.137005  115078 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.137268  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.146913  115078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1206 19:56:13.146970  115078 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.147024  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.159866  115078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1206 19:56:13.159913  115078 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.159963  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162288  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.162330  115078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1206 19:56:13.162375  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.162378  115078 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.162399  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.162407  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.165637  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.319155  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1206 19:56:13.319253  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.319274  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.319300  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 19:56:13.319371  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319394  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:13.319405  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1206 19:56:13.319423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319472  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:13.319495  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.319545  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319621  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319546  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.376009  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376036  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376100  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376145  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1206 19:56:13.376179  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1206 19:56:13.376217  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1206 19:56:13.376273  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376302  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376329  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:13.376423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:15.530421  115078 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.153965348s)
	I1206 19:56:15.530466  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1206 19:56:15.530502  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.154372843s)
	I1206 19:56:15.530536  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1206 19:56:15.530571  115078 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:15.530630  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.177508  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:15.671903  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:14.963353  115591 pod_ready.go:92] pod "kube-proxy-pt8nj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:14.963382  115591 pod_ready.go:81] duration metric: took 718.835702ms waiting for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.963391  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:17.284373  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.354814  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.824152707s)
	I1206 19:56:19.354846  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1206 19:56:19.354874  115078 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:19.354924  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:20.402300  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.047341059s)
	I1206 19:56:20.402334  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 19:56:20.402378  115078 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:20.402442  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:17.672489  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:20.171526  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.771500  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:22.273627  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.269993  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.270019  115591 pod_ready.go:81] duration metric: took 8.306621129s waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.270029  115591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:22.575204  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.17273177s)
	I1206 19:56:22.575240  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1206 19:56:22.575270  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:22.575318  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:25.335616  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.760267154s)
	I1206 19:56:25.335646  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1206 19:56:25.335680  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:25.335760  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:22.175410  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:24.677136  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.486162  115217 kubeadm.go:787] kubelet initialised
	I1206 19:56:23.486192  115217 kubeadm.go:788] duration metric: took 47.560169603s waiting for restarted kubelet to initialise ...
	I1206 19:56:23.486203  115217 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:23.491797  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499126  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.499149  115217 pod_ready.go:81] duration metric: took 7.327003ms waiting for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499160  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.503979  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.504002  115217 pod_ready.go:81] duration metric: took 4.834039ms waiting for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.504014  115217 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509110  115217 pod_ready.go:92] pod "etcd-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.509132  115217 pod_ready.go:81] duration metric: took 5.109845ms waiting for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509153  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514641  115217 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.514665  115217 pod_ready.go:81] duration metric: took 5.502762ms waiting for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514677  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886694  115217 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.886726  115217 pod_ready.go:81] duration metric: took 372.040617ms waiting for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886741  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287638  115217 pod_ready.go:92] pod "kube-proxy-sw4qv" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.287662  115217 pod_ready.go:81] duration metric: took 400.914693ms waiting for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287673  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688298  115217 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.688328  115217 pod_ready.go:81] duration metric: took 400.645544ms waiting for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688343  115217 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:26.991669  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:25.288536  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.290135  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.610095  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.274298339s)
	I1206 19:56:27.610132  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1206 19:56:27.610163  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:27.610219  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:30.272712  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.662458967s)
	I1206 19:56:30.272746  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1206 19:56:30.272782  115078 cache_images.go:123] Successfully loaded all cached images
	I1206 19:56:30.272789  115078 cache_images.go:92] LoadImages completed in 17.598011028s
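	The cache_images flow above stats each tarball under /var/lib/minikube/images, skips the transfer when the file already exists, and then loads it into CRI-O's storage via podman. A rough sketch of the load step (paths taken from the log; this is illustrative, not minikube's implementation, and requires root on the VM):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage loads a previously transferred image tarball into the
// container runtime's storage, mirroring the "podman load -i" calls above.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("image tarball not present, would need to be copied first: %w", err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
		fmt.Println(err)
	}
}
```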
	I1206 19:56:30.272883  115078 ssh_runner.go:195] Run: crio config
	I1206 19:56:30.341321  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:30.341346  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:30.341368  115078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:56:30.341392  115078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-989559 NodeName:no-preload-989559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:56:30.341597  115078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-989559"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:56:30.341693  115078 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-989559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:56:30.341758  115078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 19:56:30.351650  115078 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:56:30.351729  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:56:30.360413  115078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1206 19:56:30.376399  115078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 19:56:30.392522  115078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1206 19:56:30.409313  115078 ssh_runner.go:195] Run: grep 192.168.39.5	control-plane.minikube.internal$ /etc/hosts
	I1206 19:56:30.413355  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:30.426797  115078 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559 for IP: 192.168.39.5
	I1206 19:56:30.426854  115078 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:30.427070  115078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:56:30.427134  115078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:56:30.427240  115078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.key
	I1206 19:56:30.427311  115078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key.c9b343a5
	I1206 19:56:30.427350  115078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key
	I1206 19:56:30.427454  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:56:30.427508  115078 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:56:30.427521  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:56:30.427550  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:56:30.427571  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:56:30.427593  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:56:30.427634  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:30.428313  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:56:30.452268  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:56:30.476793  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:56:30.503751  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:56:30.530680  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:56:30.557770  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:56:30.582244  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:56:30.608096  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:56:30.634585  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:56:30.660669  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:56:30.686987  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:56:30.711098  115078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:56:30.727576  115078 ssh_runner.go:195] Run: openssl version
	I1206 19:56:30.733568  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:56:30.743777  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.748976  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.749033  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.755465  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:56:30.766285  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:56:30.777164  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782160  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782228  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.789394  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:56:30.801293  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:56:30.812646  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818147  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818209  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.824161  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:56:30.834389  115078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:56:30.839518  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:56:30.845997  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:56:30.852229  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:56:30.858622  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:56:30.864675  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:56:30.870945  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
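	The openssl runs above use -checkend 86400 to confirm each certificate remains valid for at least the next 24 hours before reusing it. A rough Go equivalent of one such check (certificate path is a placeholder):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Reports whether a PEM-encoded certificate expires within the next 24 hours,
// analogous to `openssl x509 -noout -in <cert> -checkend 86400`.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
```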
	I1206 19:56:30.878301  115078 kubeadm.go:404] StartCluster: {Name:no-preload-989559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:56:30.878438  115078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:56:30.878513  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:30.921588  115078 cri.go:89] found id: ""
	I1206 19:56:30.921692  115078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:56:30.932160  115078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:56:30.932190  115078 kubeadm.go:636] restartCluster start
	I1206 19:56:30.932264  115078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:56:30.942019  115078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.943237  115078 kubeconfig.go:92] found "no-preload-989559" server: "https://192.168.39.5:8443"
	I1206 19:56:30.945618  115078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:56:30.954582  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.954655  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.966532  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.966555  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.966602  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.979930  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:27.172625  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.671318  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:28.992218  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:30.994420  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.786922  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.787251  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.480021  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.480135  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.493287  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:31.980317  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.980409  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.994348  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.480929  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.481020  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.494940  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.980449  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.980559  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.993316  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.481040  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.481150  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.494210  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.980837  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.980936  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.994280  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.480389  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.480492  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.493915  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.980458  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.980569  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.994306  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.480788  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.480897  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:35.495397  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.980815  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.980919  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:32.171889  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:34.669989  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.491932  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.492626  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:37.991389  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.787950  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:38.288581  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	W1206 19:56:35.994848  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.480833  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.480959  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.496053  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.980074  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.980197  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.994615  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.480110  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.480203  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.494380  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.980922  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.981009  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.994865  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.480432  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.480536  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.494938  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.980148  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.980250  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.995427  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.481067  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.481153  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.494631  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.980142  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.980255  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.991638  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.480132  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:40.480205  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:40.492507  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.955413  115078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:40.955478  115078 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:40.955492  115078 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:40.955574  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:36.673986  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:39.172561  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:41.177049  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.490976  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.492210  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.293997  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.789693  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.997724  115078 cri.go:89] found id: ""
	I1206 19:56:40.997783  115078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:41.013137  115078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:41.021612  115078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:41.021667  115078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030846  115078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030878  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:41.160850  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.395616  115078 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234715721s)
	I1206 19:56:42.395651  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.595187  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.688245  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.769464  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:42.769566  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:42.783010  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.303551  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.803070  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.303922  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.803326  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.302954  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.323804  115078 api_server.go:72] duration metric: took 2.55435107s to wait for apiserver process to appear ...
	I1206 19:56:45.323839  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:45.323865  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.324588  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.324632  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.325115  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.825883  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:43.670089  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.670833  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:44.994670  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.492548  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.288109  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.788636  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.759033  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.759089  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.759117  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.778467  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.778502  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.825793  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.888751  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:49.888801  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.325211  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.330395  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.330438  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.826038  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.830801  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.830836  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:51.325298  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:51.331295  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 19:56:51.340412  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 19:56:51.340445  115078 api_server.go:131] duration metric: took 6.016598018s to wait for apiserver health ...
	I1206 19:56:51.340457  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:51.340465  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:51.383227  115078 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:47.671090  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:50.173835  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.494360  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.991886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.385027  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:51.399942  115078 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:51.422533  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:51.446615  115078 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:51.446661  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:51.446671  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:51.446684  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:51.446698  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:51.446707  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:51.446716  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:51.446731  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:51.446739  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:51.446749  115078 system_pods.go:74] duration metric: took 24.188803ms to wait for pod list to return data ...
	I1206 19:56:51.446758  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:51.452770  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:51.452803  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:51.452817  115078 node_conditions.go:105] duration metric: took 6.05327ms to run NodePressure ...
	I1206 19:56:51.452840  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:51.740786  115078 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746512  115078 kubeadm.go:787] kubelet initialised
	I1206 19:56:51.746541  115078 kubeadm.go:788] duration metric: took 5.720787ms waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746550  115078 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:51.752751  115078 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.761003  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761032  115078 pod_ready.go:81] duration metric: took 8.254381ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.761043  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761052  115078 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.766223  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766248  115078 pod_ready.go:81] duration metric: took 5.184525ms waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.766259  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766271  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.771516  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771541  115078 pod_ready.go:81] duration metric: took 5.262069ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.771552  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771561  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.827774  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827804  115078 pod_ready.go:81] duration metric: took 56.232455ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.827818  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827826  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.231699  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231761  115078 pod_ready.go:81] duration metric: took 403.922333ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.231774  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231790  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.626827  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626863  115078 pod_ready.go:81] duration metric: took 395.06457ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.626877  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626889  115078 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:53.028166  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028201  115078 pod_ready.go:81] duration metric: took 401.294916ms waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:53.028214  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028226  115078 pod_ready.go:38] duration metric: took 1.281664253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:53.028249  115078 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:56:53.057673  115078 ops.go:34] apiserver oom_adj: -16
	I1206 19:56:53.057706  115078 kubeadm.go:640] restartCluster took 22.12550727s
	I1206 19:56:53.057718  115078 kubeadm.go:406] StartCluster complete in 22.179430573s
	I1206 19:56:53.057756  115078 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.057857  115078 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:56:53.059885  115078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.060125  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:56:53.060244  115078 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:56:53.060337  115078 addons.go:69] Setting storage-provisioner=true in profile "no-preload-989559"
	I1206 19:56:53.060364  115078 addons.go:231] Setting addon storage-provisioner=true in "no-preload-989559"
	I1206 19:56:53.060367  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	W1206 19:56:53.060375  115078 addons.go:240] addon storage-provisioner should already be in state true
	I1206 19:56:53.060404  115078 addons.go:69] Setting default-storageclass=true in profile "no-preload-989559"
	I1206 19:56:53.060415  115078 addons.go:69] Setting metrics-server=true in profile "no-preload-989559"
	I1206 19:56:53.060430  115078 addons.go:231] Setting addon metrics-server=true in "no-preload-989559"
	I1206 19:56:53.060433  115078 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-989559"
	W1206 19:56:53.060440  115078 addons.go:240] addon metrics-server should already be in state true
	I1206 19:56:53.060452  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060472  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060856  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060889  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060917  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.065950  115078 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-989559" context rescaled to 1 replicas
	I1206 19:56:53.065992  115078 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:56:53.068038  115078 out.go:177] * Verifying Kubernetes components...
	I1206 19:56:53.069775  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:56:53.077795  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I1206 19:56:53.078120  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
	I1206 19:56:53.078274  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078714  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078902  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.078928  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079207  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.079226  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079272  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079514  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079727  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.079865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.079899  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.083670  115078 addons.go:231] Setting addon default-storageclass=true in "no-preload-989559"
	W1206 19:56:53.083695  115078 addons.go:240] addon default-storageclass should already be in state true
	I1206 19:56:53.083724  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.084178  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.084230  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.097845  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1206 19:56:53.098357  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.099058  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.099080  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.099409  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.099633  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.101625  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.103641  115078 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 19:56:53.105081  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I1206 19:56:53.105105  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 19:56:53.105123  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 19:56:53.105150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.104327  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1206 19:56:53.105556  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105777  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105983  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.105998  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106312  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.106328  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106619  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.106910  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.107192  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107229  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.107338  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107398  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.108297  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.108969  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.108999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.109150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.109436  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.109586  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.109725  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.123985  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I1206 19:56:53.124496  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125052  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.125078  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.125325  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1206 19:56:53.125570  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.125785  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125826  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.126385  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.126413  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.126875  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.127050  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.127923  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.128212  115078 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.128226  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:56:53.128242  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.128747  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.131043  115078 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:53.131487  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132638  115078 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.132645  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.132651  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:56:53.132667  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.132682  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132132  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.133425  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.133636  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.133870  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.136039  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136583  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.136612  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136850  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.137087  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.137390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.137583  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.247726  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 19:56:53.247751  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 19:56:53.271421  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.296149  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 19:56:53.296181  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 19:56:53.303580  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.350607  115078 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1206 19:56:53.350607  115078 node_ready.go:35] waiting up to 6m0s for node "no-preload-989559" to be "Ready" ...
	I1206 19:56:53.355315  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.355336  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 19:56:53.392730  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.624768  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.624798  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625224  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625330  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.625353  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.625393  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625227  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:53.625849  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625874  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.632671  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.632691  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.632983  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.633005  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433395  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12977215s)
	I1206 19:56:54.433462  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433491  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433360  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040565961s)
	I1206 19:56:54.433546  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433567  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433833  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433854  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433863  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433867  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433871  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433842  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433908  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433926  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433939  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433951  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.434124  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434148  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434153  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434199  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434212  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434224  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434240  115078 addons.go:467] Verifying addon metrics-server=true in "no-preload-989559"
	I1206 19:56:54.437357  115078 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 19:56:50.289141  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:52.289568  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.438928  115078 addons.go:502] enable addons completed in 1.378684523s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 19:56:55.439812  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.174520  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.175288  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.492713  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:56.493106  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.789039  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.288485  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.289450  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.931320  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:57:00.430485  115078 node_ready.go:49] node "no-preload-989559" has status "Ready":"True"
	I1206 19:57:00.430517  115078 node_ready.go:38] duration metric: took 7.079875254s waiting for node "no-preload-989559" to be "Ready" ...
	I1206 19:57:00.430530  115078 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:57:00.436772  115078 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442667  115078 pod_ready.go:92] pod "coredns-76f75df574-h9pkz" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:00.442688  115078 pod_ready.go:81] duration metric: took 5.884841ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442701  115078 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:56.671845  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.172634  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.175416  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:58.991760  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:00.992295  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.787443  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.787988  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:02.468096  115078 pod_ready.go:102] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:04.965881  115078 pod_ready.go:92] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.965905  115078 pod_ready.go:81] duration metric: took 4.523195911s waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.965916  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971414  115078 pod_ready.go:92] pod "kube-apiserver-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.971433  115078 pod_ready.go:81] duration metric: took 5.510214ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971441  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977851  115078 pod_ready.go:92] pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.977870  115078 pod_ready.go:81] duration metric: took 6.422623ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977878  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985189  115078 pod_ready.go:92] pod "kube-proxy-zgqvt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.985215  115078 pod_ready.go:81] duration metric: took 7.330713ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985224  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230810  115078 pod_ready.go:92] pod "kube-scheduler-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:05.230835  115078 pod_ready.go:81] duration metric: took 245.59313ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230845  115078 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:03.189551  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.673064  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.491815  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.991689  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.992156  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.789026  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.789964  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.538620  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.040533  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:08.171042  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.671754  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.491886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.287716  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.788212  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.538291  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.541614  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.672138  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:15.171421  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.992060  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.502730  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.788301  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.287038  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.288646  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.038893  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.543137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.671258  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:20.170885  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.991949  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.491591  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:21.787339  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:23.788729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.041590  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.540137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.171069  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.670919  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.992198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.492171  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:26.290524  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:28.787761  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.039132  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.542736  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.170762  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.171345  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.992006  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.288189  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:33.787785  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.039418  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.039727  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.670563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.170705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.171236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.492161  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.492522  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:35.788140  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:37.788283  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.540765  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:39.038645  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.171622  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.670580  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.990433  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.990810  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.992228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.287403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.287578  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.287701  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:41.039767  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.539800  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.543374  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.173769  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.670574  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.995625  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:47.492316  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:46.289397  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.787659  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.038286  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.039013  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.176705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.670177  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:49.991919  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.491478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.788175  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.288824  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.040785  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.538521  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.173256  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.670940  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.492526  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.493207  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.787745  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:57.788237  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.539097  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.039024  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.174463  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.674095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.990652  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.993255  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.788454  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:02.287774  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:04.288180  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:01.042813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.541670  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.171100  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.673480  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.492375  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.991094  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:07.992159  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.288916  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.289817  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.038556  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.038962  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.539560  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.171785  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.671152  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:09.993042  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.491776  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.790823  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.791724  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.540234  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.542433  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.672062  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.170654  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.993921  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.492163  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.289223  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.787808  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.038754  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.039749  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.171210  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.670633  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.991157  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.991531  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.788614  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:22.288567  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.040007  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.047504  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:25.539859  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.671920  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.173543  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.993354  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.491975  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.789151  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.789703  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.287981  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.038595  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.039044  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.670809  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.171281  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.492552  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.990797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.991467  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.289190  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.788860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.046392  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.538829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.671784  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.672095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:36.171077  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.992478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.492021  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:35.789666  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.287860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.038795  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.537643  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.670088  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.171066  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.991754  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.994379  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:40.288183  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:42.788826  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.539212  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.543524  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.674139  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.170213  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:44.491092  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.491632  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:45.287473  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:47.288157  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:49.289525  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.038254  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.039117  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.039290  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.170319  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.671091  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.492359  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.992132  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:51.787368  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.788448  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:52.039474  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:54.540427  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.169921  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.171727  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.492764  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.993038  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:56.287644  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.288171  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.038915  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.039626  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.671011  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.671928  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.491565  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.492398  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.994198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.788591  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.789729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:01.540414  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:03.547448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.172546  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:04.670363  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.492399  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.991600  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.287805  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.289128  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.039393  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:08.040259  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.541882  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.670653  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.172460  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.491981  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.991797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.788064  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.544283  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:15.040829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:11.673737  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.172972  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.992556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.492610  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.788287  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.789265  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.287925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.542363  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:20.039068  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.674724  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:18.675236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.170028  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.493199  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.992164  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.288023  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.289315  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:22.539662  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.038813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.170153  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.172299  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:24.491811  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:26.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.788309  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.791911  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.539832  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:29.540277  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.671148  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.171591  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:28.990920  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.992085  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.992394  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.288522  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.288574  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:31.542448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.039116  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.671751  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.169968  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.492708  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.992344  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.787925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.788270  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:38.788369  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.539113  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.040215  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.171340  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.171482  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.491091  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:42.491915  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.789138  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.287352  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.538818  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.539787  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.670936  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.671019  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.671158  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:44.992666  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.491581  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.287493  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.787403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:46.039500  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.538497  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.539750  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.171563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.673901  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.991083  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.991943  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.788072  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.788139  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.788885  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.039532  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.539183  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.177102  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.670778  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.992408  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.492592  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.288587  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.288722  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:57.539766  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.038890  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.171948  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.173211  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.492926  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.992517  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.992971  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.291465  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.292084  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.039986  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.541022  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.674513  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.407290  115497 pod_ready.go:81] duration metric: took 4m0.000215571s waiting for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:04.407325  115497 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:04.407343  115497 pod_ready.go:38] duration metric: took 4m12.62023597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:04.407376  115497 kubeadm.go:640] restartCluster took 4m33.115368763s
	W1206 20:00:04.407460  115497 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:04.407558  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:05.492129  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:07.493228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.788290  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.789396  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:08.789507  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.541064  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.040499  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.992817  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:12.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.288813  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.788228  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.540420  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.540837  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:14.492803  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:16.991852  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.762771  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.35517444s)
	I1206 20:00:18.762878  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:18.777691  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:18.788508  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:18.798417  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:18.798483  115497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:18.858377  115497 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:18.858486  115497 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:19.020664  115497 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:19.020845  115497 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:19.020979  115497 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:19.294254  115497 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:15.788560  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.288173  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:19.296186  115497 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:19.296294  115497 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:19.296394  115497 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:19.296512  115497 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:19.296601  115497 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:19.296712  115497 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:19.296779  115497 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:19.296938  115497 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:19.297044  115497 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:19.297141  115497 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:19.297228  115497 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:19.297296  115497 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:19.297374  115497 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:19.401712  115497 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:19.667664  115497 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:19.977926  115497 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:20.161984  115497 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:20.162704  115497 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:20.165273  115497 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:16.040687  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.540495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.167168  115497 out.go:204]   - Booting up control plane ...
	I1206 20:00:20.167327  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:20.167488  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:20.167596  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:20.186839  115497 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:20.187950  115497 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:20.188122  115497 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:20.329099  115497 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:18.991946  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:21.490687  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.290780  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:22.293161  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.270450  115591 pod_ready.go:81] duration metric: took 4m0.000401122s waiting for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:23.270504  115591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:23.270527  115591 pod_ready.go:38] duration metric: took 4m9.100871827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:23.270576  115591 kubeadm.go:640] restartCluster took 4m28.999844958s
	W1206 20:00:23.270666  115591 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:23.270705  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:21.040410  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.041625  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:25.044168  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.492875  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:24.689131  115217 pod_ready.go:81] duration metric: took 4m0.000750192s waiting for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:24.689173  115217 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:24.689203  115217 pod_ready.go:38] duration metric: took 4m1.202987977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:24.689247  115217 kubeadm.go:640] restartCluster took 5m10.459408033s
	W1206 20:00:24.689356  115217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:24.689392  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:29.334312  115497 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004152 seconds
	I1206 20:00:29.334473  115497 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:29.360390  115497 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:29.898911  115497 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:29.899167  115497 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-380424 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:30.416589  115497 kubeadm.go:322] [bootstrap-token] Using token: gsw79m.btql0t11yc11efah
	I1206 20:00:30.418388  115497 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:30.418538  115497 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:30.424651  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:30.439637  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:30.443854  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:30.448439  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:30.454084  115497 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:30.473340  115497 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:30.748803  115497 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:30.835721  115497 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:30.837289  115497 kubeadm.go:322] 
	I1206 20:00:30.837362  115497 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:30.837381  115497 kubeadm.go:322] 
	I1206 20:00:30.837449  115497 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:30.837457  115497 kubeadm.go:322] 
	I1206 20:00:30.837485  115497 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:30.837597  115497 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:30.837675  115497 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:30.837684  115497 kubeadm.go:322] 
	I1206 20:00:30.837760  115497 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:30.837770  115497 kubeadm.go:322] 
	I1206 20:00:30.837826  115497 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:30.837833  115497 kubeadm.go:322] 
	I1206 20:00:30.837899  115497 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:30.838016  115497 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:30.838114  115497 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:30.838124  115497 kubeadm.go:322] 
	I1206 20:00:30.838224  115497 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:30.838316  115497 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:30.838333  115497 kubeadm.go:322] 
	I1206 20:00:30.838409  115497 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838522  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:30.838559  115497 kubeadm.go:322] 	--control-plane 
	I1206 20:00:30.838568  115497 kubeadm.go:322] 
	I1206 20:00:30.838686  115497 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:30.838699  115497 kubeadm.go:322] 
	I1206 20:00:30.838805  115497 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838952  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:30.839686  115497 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:30.839714  115497 cni.go:84] Creating CNI manager for ""
	I1206 20:00:30.839727  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:30.841824  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:27.540848  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.038457  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.843246  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:30.916583  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:30.974088  115497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=default-k8s-diff-port-380424 minikube.k8s.io/updated_at=2023_12_06T20_00_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.400910  115497 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:31.401056  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.320362  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.630947418s)
	I1206 20:00:31.320445  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:31.349765  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:31.369412  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:31.381350  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:31.381410  115217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1206 20:00:31.626397  115217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:32.039425  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:34.041934  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:31.516285  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.139221  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.639059  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.139995  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.639038  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.139842  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.640037  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.139893  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.639961  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:36.139749  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.383787  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.113041618s)
	I1206 20:00:38.383859  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:38.397718  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:38.406748  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:38.415574  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:38.415633  115591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:38.485595  115591 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:38.485781  115591 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:38.659892  115591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:38.660073  115591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:38.660209  115591 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:38.939756  115591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:38.941971  115591 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:38.942103  115591 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:38.942200  115591 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:38.942296  115591 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:38.942708  115591 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:38.943817  115591 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:38.944130  115591 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:38.944894  115591 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:38.945607  115591 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:38.946355  115591 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:38.947015  115591 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:38.947720  115591 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:38.947795  115591 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:39.140045  115591 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:39.300047  115591 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:39.418439  115591 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:40.060938  115591 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:40.061616  115591 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:40.064208  115591 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:36.042049  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:38.540429  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:36.639372  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.139213  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.639506  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.139159  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.639007  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.139972  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.639969  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.139910  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.639836  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.139009  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.639153  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.139055  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.639853  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.139934  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.639741  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.139776  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.279581  115497 kubeadm.go:1088] duration metric: took 13.305461955s to wait for elevateKubeSystemPrivileges.
	I1206 20:00:44.279625  115497 kubeadm.go:406] StartCluster complete in 5m13.04588426s
	I1206 20:00:44.279660  115497 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.279765  115497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:00:44.282748  115497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.285263  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:00:44.285351  115497 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:00:44.285434  115497 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285459  115497 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285471  115497 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:00:44.285478  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:00:44.285531  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285542  115497 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285561  115497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-380424"
	I1206 20:00:44.285719  115497 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285738  115497 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285747  115497 addons.go:240] addon metrics-server should already be in state true
	I1206 20:00:44.285797  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286023  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286026  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286167  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286190  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.306223  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1206 20:00:44.306441  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39661
	I1206 20:00:44.307505  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.307637  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.308463  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I1206 20:00:44.308651  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.308672  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309154  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.309173  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309295  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.309539  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.310150  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.310183  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.310431  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.312432  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.313004  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.313020  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.315047  115497 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.315065  115497 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:00:44.315094  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.315499  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.315523  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.316248  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.316893  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.316920  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.335555  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I1206 20:00:44.335908  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1206 20:00:44.336636  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.336749  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.337379  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337404  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337791  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337818  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337895  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.338474  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.338502  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.338944  115497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-380424" context rescaled to 1 replicas
	I1206 20:00:44.338979  115497 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:00:44.340731  115497 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:44.339696  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.342367  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:44.342537  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.348774  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I1206 20:00:44.348808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.350935  115497 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:00:44.349433  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.353022  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:00:44.353036  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:00:44.353060  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.353493  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.353512  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.354850  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.355732  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.356894  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.359438  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1206 20:00:44.360009  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.360502  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.360525  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.360899  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.361092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.362575  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.362605  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.362663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.363067  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.363259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.363310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.363544  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.363628  115497 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.363643  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:00:44.363663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.365352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.367261  115497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:00:40.066048  115591 out.go:204]   - Booting up control plane ...
	I1206 20:00:40.066207  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:40.066320  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:40.069077  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:40.086558  115591 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:40.087856  115591 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:40.087969  115591 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:40.224157  115591 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.313051  115217 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1206 20:00:45.313125  115217 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:45.313226  115217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:45.313355  115217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:45.313466  115217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:45.313591  115217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:45.313697  115217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:45.313767  115217 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1206 20:00:45.313844  115217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:45.315759  115217 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:45.315876  115217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:45.315980  115217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:45.316085  115217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:45.316158  115217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:45.316252  115217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:45.316320  115217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:45.316420  115217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:45.316505  115217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:45.316608  115217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:45.316707  115217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:45.316761  115217 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:45.316838  115217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:45.316909  115217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:45.316982  115217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:45.317068  115217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:45.317136  115217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:45.317221  115217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:45.318915  115217 out.go:204]   - Booting up control plane ...
	I1206 20:00:45.319042  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:45.319145  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:45.319253  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:45.319367  115217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:45.319568  115217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.319690  115217 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504419 seconds
	I1206 20:00:45.319828  115217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:45.319978  115217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:45.320042  115217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:45.320189  115217 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-448851 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 20:00:45.320255  115217 kubeadm.go:322] [bootstrap-token] Using token: ms33mw.f0m2wm1rokle0nnu
	I1206 20:00:45.321976  115217 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:45.322105  115217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:45.322229  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:45.322373  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:45.322532  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:45.322673  115217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:45.322759  115217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:45.322845  115217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:45.322857  115217 kubeadm.go:322] 
	I1206 20:00:45.322936  115217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:45.322945  115217 kubeadm.go:322] 
	I1206 20:00:45.323055  115217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:45.323071  115217 kubeadm.go:322] 
	I1206 20:00:45.323105  115217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:45.323196  115217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:45.323270  115217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:45.323282  115217 kubeadm.go:322] 
	I1206 20:00:45.323373  115217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:45.323477  115217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:45.323575  115217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:45.323590  115217 kubeadm.go:322] 
	I1206 20:00:45.323736  115217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1206 20:00:45.323840  115217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:45.323855  115217 kubeadm.go:322] 
	I1206 20:00:45.323984  115217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324187  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:45.324248  115217 kubeadm.go:322]     --control-plane 	  
	I1206 20:00:45.324266  115217 kubeadm.go:322] 
	I1206 20:00:45.324386  115217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:45.324397  115217 kubeadm.go:322] 
	I1206 20:00:45.324501  115217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324651  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:45.324664  115217 cni.go:84] Creating CNI manager for ""
	I1206 20:00:45.324675  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:45.327284  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:41.039495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:43.041892  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:45.042744  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:44.369437  115497 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.369449  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.369458  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:00:44.369482  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.373360  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373394  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373415  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.373538  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373769  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.373830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.374020  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.374077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.374221  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.374800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.375017  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.528574  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.553349  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:00:44.553382  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:00:44.604100  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.605360  115497 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.605799  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:00:44.610007  115497 node_ready.go:49] node "default-k8s-diff-port-380424" has status "Ready":"True"
	I1206 20:00:44.610039  115497 node_ready.go:38] duration metric: took 4.647914ms waiting for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.610052  115497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:44.622684  115497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:44.639914  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:00:44.640005  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:00:44.710284  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:44.710318  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:00:44.767014  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:46.656182  115497 pod_ready.go:102] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:46.941717  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.413097049s)
	I1206 20:00:46.941764  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.33594011s)
	I1206 20:00:46.941787  115497 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1206 20:00:46.941793  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941733  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337595925s)
	I1206 20:00:46.941808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.941841  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941863  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.942167  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.942187  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.942198  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.942207  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.943997  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944031  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944041  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944052  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944060  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944077  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.944088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.944363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944401  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944419  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.984172  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.984206  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.984675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.984714  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.984733  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.345448  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.5783821s)
	I1206 20:00:47.345552  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.345573  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.345987  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.346033  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346046  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346056  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.346088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.346359  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346380  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346392  115497 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-380424"
	I1206 20:00:47.346442  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.348281  115497 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1206 20:00:45.328763  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:45.342986  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:45.373351  115217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:45.373503  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.373559  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=old-k8s-version-448851 minikube.k8s.io/updated_at=2023_12_06T20_00_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.701779  115217 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:45.701907  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.815705  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.445065  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.945361  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.444737  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.945540  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.228883  115591 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004688 seconds
	I1206 20:00:49.229058  115591 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:49.258512  115591 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:49.793797  115591 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:49.794010  115591 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-209025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:50.315415  115591 kubeadm.go:322] [bootstrap-token] Using token: j4xv0f.htia0y0wrnbqnji6
	I1206 20:00:47.349693  115497 addons.go:502] enable addons completed in 3.064343142s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1206 20:00:48.648085  115497 pod_ready.go:92] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.648116  115497 pod_ready.go:81] duration metric: took 4.025396521s waiting for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.648132  115497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660202  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.660235  115497 pod_ready.go:81] duration metric: took 12.09317ms waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660248  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666568  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.666666  115497 pod_ready.go:81] duration metric: took 6.407781ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666694  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679566  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.679653  115497 pod_ready.go:81] duration metric: took 12.938485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679675  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554241  115497 pod_ready.go:92] pod "kube-proxy-khh5n" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.554266  115497 pod_ready.go:81] duration metric: took 874.584613ms waiting for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554275  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845110  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.845140  115497 pod_ready.go:81] duration metric: took 290.857787ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845152  115497 pod_ready.go:38] duration metric: took 5.235087469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
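The pod_ready.go waits above poll each system-critical pod until its PodReady condition reports True. A minimal client-go sketch of that polling pattern (not minikube's actual code; the kubeconfig path and pod name below are placeholders taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 2s for up to 6m, matching the 6m0s waits in the log.
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-380424", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient errors as "not ready yet"
    		}
    		return isReady(pod), nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }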
	I1206 20:00:49.845172  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:00:49.845251  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:00:49.861908  115497 api_server.go:72] duration metric: took 5.522870891s to wait for apiserver process to appear ...
	I1206 20:00:49.861943  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:00:49.861965  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 20:00:49.868675  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 20:00:49.870214  115497 api_server.go:141] control plane version: v1.28.4
	I1206 20:00:49.870254  115497 api_server.go:131] duration metric: took 8.303187ms to wait for apiserver health ...
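The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, considered healthy once it returns 200 with the body "ok". A small sketch of such a probe using the address from the log; TLS verification is skipped here only to keep the example short, which a production check should not do:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// For brevity this sketch skips TLS verification; a real probe
    		// should trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.72.22:8444/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }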
	I1206 20:00:49.870266  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:00:50.047974  115497 system_pods.go:59] 8 kube-system pods found
	I1206 20:00:50.048004  115497 system_pods.go:61] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.048011  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.048018  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.048025  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.048030  115497 system_pods.go:61] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.048036  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.048045  115497 system_pods.go:61] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.048052  115497 system_pods.go:61] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.048063  115497 system_pods.go:74] duration metric: took 177.789423ms to wait for pod list to return data ...
	I1206 20:00:50.048073  115497 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:00:50.246867  115497 default_sa.go:45] found service account: "default"
	I1206 20:00:50.246903  115497 default_sa.go:55] duration metric: took 198.823117ms for default service account to be created ...
	I1206 20:00:50.246914  115497 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:00:50.447688  115497 system_pods.go:86] 8 kube-system pods found
	I1206 20:00:50.447777  115497 system_pods.go:89] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.447798  115497 system_pods.go:89] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.447815  115497 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.447846  115497 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.447870  115497 system_pods.go:89] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.447886  115497 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.447904  115497 system_pods.go:89] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.447920  115497 system_pods.go:89] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.447953  115497 system_pods.go:126] duration metric: took 201.030369ms to wait for k8s-apps to be running ...
	I1206 20:00:50.447978  115497 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:00:50.448057  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:50.468801  115497 system_svc.go:56] duration metric: took 20.810606ms WaitForService to wait for kubelet.
	I1206 20:00:50.468837  115497 kubeadm.go:581] duration metric: took 6.129827661s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:00:50.468860  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:00:50.646083  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:00:50.646124  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 20:00:50.646138  115497 node_conditions.go:105] duration metric: took 177.272089ms to run NodePressure ...
	I1206 20:00:50.646153  115497 start.go:228] waiting for startup goroutines ...
	I1206 20:00:50.646164  115497 start.go:233] waiting for cluster config update ...
	I1206 20:00:50.646184  115497 start.go:242] writing updated cluster config ...
	I1206 20:00:50.646551  115497 ssh_runner.go:195] Run: rm -f paused
	I1206 20:00:50.711246  115497 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:00:50.713989  115497 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-380424" cluster and "default" namespace by default
	I1206 20:00:50.317018  115591 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:50.317155  115591 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:50.325410  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:50.335197  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:50.339351  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:50.343930  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:50.352323  115591 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:50.375514  115591 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:50.703397  115591 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:50.753323  115591 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:50.753351  115591 kubeadm.go:322] 
	I1206 20:00:50.753419  115591 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:50.753430  115591 kubeadm.go:322] 
	I1206 20:00:50.753522  115591 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:50.753539  115591 kubeadm.go:322] 
	I1206 20:00:50.753570  115591 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:50.753642  115591 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:50.753706  115591 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:50.753717  115591 kubeadm.go:322] 
	I1206 20:00:50.753780  115591 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:50.753790  115591 kubeadm.go:322] 
	I1206 20:00:50.753847  115591 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:50.753862  115591 kubeadm.go:322] 
	I1206 20:00:50.753928  115591 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:50.754020  115591 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:50.754109  115591 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:50.754120  115591 kubeadm.go:322] 
	I1206 20:00:50.754221  115591 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:50.754317  115591 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:50.754327  115591 kubeadm.go:322] 
	I1206 20:00:50.754426  115591 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754552  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:50.754583  115591 kubeadm.go:322] 	--control-plane 
	I1206 20:00:50.754593  115591 kubeadm.go:322] 
	I1206 20:00:50.754690  115591 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:50.754707  115591 kubeadm.go:322] 
	I1206 20:00:50.754802  115591 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754931  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:50.755776  115591 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
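The --discovery-token-ca-cert-hash value in the join commands above is, per kubeadm's documentation, the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it on the control-plane node (standard kubeadm CA path assumed):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Recompute the kubeadm discovery-token-ca-cert-hash: SHA-256 over
    	// the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }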
	I1206 20:00:50.755809  115591 cni.go:84] Creating CNI manager for ""
	I1206 20:00:50.755820  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:50.759045  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:47.539932  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:50.039553  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:48.445172  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:48.944908  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.445418  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.944612  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.445278  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.944545  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.444775  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.945470  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.445365  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.944742  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.760722  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:50.792095  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
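The 457-byte 1-k8s.conflist copied above is the bridge CNI configuration announced by the "Configuring bridge CNI" step. The log does not show its contents, so the conflist below is only an illustrative guess at the shape of such a file, written out the same way:

    package main

    import "os"

    // Illustrative bridge CNI config; the actual 1-k8s.conflist minikube
    // generates is not shown in this log, so subnet and flags are guesses.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }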
	I1206 20:00:50.854264  115591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:50.854443  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.854549  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=embed-certs-209025 minikube.k8s.io/updated_at=2023_12_06T20_00_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.894717  115591 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:51.388829  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.515185  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.132878  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.633171  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.132766  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.632887  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.132824  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.044531  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:54.538924  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:53.444641  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.945468  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.444996  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.944687  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.444757  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.945342  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.445585  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.945489  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.445628  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.944895  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.632961  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.132361  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.632305  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.132439  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.632252  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.132956  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.633210  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.133090  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.632198  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.133167  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.445440  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.945554  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.445298  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.945574  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.179151  115217 kubeadm.go:1088] duration metric: took 14.805687634s to wait for elevateKubeSystemPrivileges.
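The long run of "kubectl get sa default" invocations above is a poll: elevateKubeSystemPrivileges keeps re-running the command until the "default" ServiceAccount exists, then proceeds. A compact sketch of the same loop using os/exec, with the binary path and kubeconfig taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Re-run the same probe the log shows until it succeeds (or give up).
    	for i := 0; i < 240; i++ {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default ServiceAccount exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	panic("timed out waiting for the default ServiceAccount")
    }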
	I1206 20:01:00.179185  115217 kubeadm.go:406] StartCluster complete in 5m46.007596294s
	I1206 20:01:00.179204  115217 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.179291  115217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:00.181490  115217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.181810  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:00.181933  115217 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:00.182031  115217 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182063  115217 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-448851"
	W1206 20:01:00.182071  115217 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:00.182126  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.182126  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 20:01:00.182180  115217 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182198  115217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448851"
	I1206 20:01:00.182554  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182572  115217 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182581  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182591  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182596  115217 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-448851"
	W1206 20:01:00.182606  115217 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:00.182613  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182735  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.183101  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.183146  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.201450  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I1206 20:01:00.203683  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I1206 20:01:00.203715  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.203800  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1206 20:01:00.204181  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204341  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204386  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204409  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204863  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204877  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204884  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204895  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204950  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205328  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205333  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205489  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.205520  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.205560  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.205992  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.206064  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.209487  115217 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-448851"
	W1206 20:01:00.209512  115217 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:00.209545  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.209987  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.210033  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.227092  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1206 20:01:00.227961  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.228610  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.228633  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.229107  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.229342  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.230638  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I1206 20:01:00.231552  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.231863  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.235076  115217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:00.232196  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.232926  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I1206 20:01:00.237258  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.237284  115217 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.237310  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:00.237333  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.237682  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.238034  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.238212  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.238240  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.238580  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.238612  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.238977  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.239198  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.240631  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.243107  115217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:00.241155  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.241833  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.245218  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:00.245244  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:00.245267  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.245315  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.245333  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.245505  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.245639  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.245737  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.248492  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249278  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.249313  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249597  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.249811  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.249971  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.250090  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.259179  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I1206 20:01:00.259617  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.260068  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.260090  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.260461  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.260685  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.262284  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.262586  115217 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.262604  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:00.262623  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.265183  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265643  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.265661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265890  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.266078  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.266240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.266941  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.271403  115217 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-448851" context rescaled to 1 replicas
	I1206 20:01:00.271435  115217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:00.273197  115217 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:57.039307  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:59.039639  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:00.274454  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:00.597204  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:00.597240  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:00.621632  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.623444  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.630185  115217 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.630280  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:00.633576  115217 node_ready.go:49] node "old-k8s-version-448851" has status "Ready":"True"
	I1206 20:01:00.633603  115217 node_ready.go:38] duration metric: took 3.385927ms waiting for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.633616  115217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:00.717216  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:00.717273  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:00.735998  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:00.866186  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:00.866218  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:01.066040  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:01.835164  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213479825s)
	I1206 20:01:01.835230  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835243  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835558  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835605  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835615  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.835648  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835939  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835974  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835983  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.872799  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.872835  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.873282  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.873317  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.873336  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.258697  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635202106s)
	I1206 20:01:02.258754  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.258769  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.258773  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.628450705s)
	I1206 20:01:02.258806  115217 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
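The kubectl/sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.61.1 here). A hedged client-go equivalent of that edit, assuming the Corefile contains the standard indented "forward . /etc/resolv.conf" line that the sed expression targets:

    package main

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Insert a hosts{} block ahead of the forward plugin so
    	// host.minikube.internal resolves to the host gateway.
    	hosts := "        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }\n"
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
    		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
    	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }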
	I1206 20:01:02.259113  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.260973  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261002  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261014  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.261025  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.261416  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261440  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261424  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.375593  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.309500554s)
	I1206 20:01:02.375659  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.375680  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376064  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376155  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376168  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376185  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.376193  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376522  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376532  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376543  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376559  115217 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-448851"
	I1206 20:01:02.378457  115217 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:02.380099  115217 addons.go:502] enable addons completed in 2.198162438s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:00:59.632971  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.133124  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.633148  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.132260  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.632323  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.132575  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.632268  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.132789  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.633155  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.132754  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.321130  115591 kubeadm.go:1088] duration metric: took 13.466729355s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:04.321175  115591 kubeadm.go:406] StartCluster complete in 5m10.1110739s
	I1206 20:01:04.321200  115591 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.321311  115591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:04.324158  115591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.324502  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:04.324531  115591 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:04.324609  115591 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-209025"
	I1206 20:01:04.324633  115591 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-209025"
	W1206 20:01:04.324640  115591 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:04.324670  115591 addons.go:69] Setting default-storageclass=true in profile "embed-certs-209025"
	I1206 20:01:04.324699  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.324702  115591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-209025"
	I1206 20:01:04.324729  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:01:04.324799  115591 addons.go:69] Setting metrics-server=true in profile "embed-certs-209025"
	I1206 20:01:04.324813  115591 addons.go:231] Setting addon metrics-server=true in "embed-certs-209025"
	W1206 20:01:04.324820  115591 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:04.324858  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.325100  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325126  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325127  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325163  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325191  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325213  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.344127  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I1206 20:01:04.344361  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I1206 20:01:04.344866  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.344978  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.345615  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345635  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.345756  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345766  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.346201  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.346772  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.346821  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.347367  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.347741  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.348264  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
	I1206 20:01:04.348754  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.349655  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.349676  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.350156  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.352233  115591 addons.go:231] Setting addon default-storageclass=true in "embed-certs-209025"
	W1206 20:01:04.352257  115591 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:04.352286  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.352700  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.352734  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.353530  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.353563  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.365607  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I1206 20:01:04.366094  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.366493  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.366514  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.366780  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.366908  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.368611  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.370655  115591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:04.372351  115591 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.372372  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1206 20:01:04.372376  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:04.372402  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.373021  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I1206 20:01:04.374446  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.375104  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.375126  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.375570  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.375769  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.376448  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.376851  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.376907  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.377123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.377377  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.377531  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.379514  115591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:04.377862  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.378152  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.381562  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.381682  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:04.381700  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:04.381722  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.382619  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.382788  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.383576  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.384146  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.384176  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.386297  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.386684  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.386734  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.387477  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.387726  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.387913  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.388055  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.401629  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I1206 20:01:04.402214  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.402804  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.402826  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.403127  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.403337  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.405059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.405404  115591 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.405427  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:04.405449  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.408608  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409145  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.409176  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409443  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.409640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.409860  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.410016  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	W1206 20:01:04.462788  115591 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "embed-certs-209025" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1206 20:01:04.462843  115591 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1206 20:01:04.462872  115591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:04.464916  115591 out.go:177] * Verifying Kubernetes components...
	I1206 20:01:04.466388  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:01.039870  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:03.550944  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.231905  115078 pod_ready.go:81] duration metric: took 4m0.001038985s waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:05.231950  115078 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:01:05.231962  115078 pod_ready.go:38] duration metric: took 4m4.801417566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:05.231988  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:05.232081  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:05.232155  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:05.294538  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:05.294570  115078 cri.go:89] found id: ""
	I1206 20:01:05.294581  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:05.294643  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.300221  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:05.300300  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:05.359655  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:05.359685  115078 cri.go:89] found id: ""
	I1206 20:01:05.359696  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:05.359759  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.364518  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:05.364600  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:05.408448  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:05.408490  115078 cri.go:89] found id: ""
	I1206 20:01:05.408510  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:05.408575  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.413345  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:05.413428  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:05.462932  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.462960  115078 cri.go:89] found id: ""
	I1206 20:01:05.462971  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:05.463034  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.468632  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:05.468713  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:05.519690  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:05.519720  115078 cri.go:89] found id: ""
	I1206 20:01:05.519731  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:05.519789  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.525847  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:05.525933  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:05.580475  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:05.580537  115078 cri.go:89] found id: ""
	I1206 20:01:05.580550  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:05.580623  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.585602  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:05.585688  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:05.636350  115078 cri.go:89] found id: ""
	I1206 20:01:05.636383  115078 logs.go:284] 0 containers: []
	W1206 20:01:05.636394  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:05.636403  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:05.636469  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:05.678819  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:05.678846  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:05.678853  115078 cri.go:89] found id: ""
	I1206 20:01:05.678863  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:05.678929  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.683845  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.689989  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:05.690021  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.745510  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:05.745554  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:04.580869  115591 node_ready.go:35] waiting up to 6m0s for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.580933  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:04.585219  115591 node_ready.go:49] node "embed-certs-209025" has status "Ready":"True"
	I1206 20:01:04.585267  115591 node_ready.go:38] duration metric: took 4.363508ms waiting for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.585281  115591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:04.595166  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:04.611829  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.622127  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.628233  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:04.628260  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:04.706473  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:04.706498  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:04.790827  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:04.790868  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:04.840367  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:06.312054  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.73108071s)
	I1206 20:01:06.312092  115591 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
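
The replace that just completed rewrites the coredns ConfigMap in place: the first sed expression in the command above splices a hosts block in front of the forward directive so that host.minikube.internal resolves to the host machine, and the second simply inserts a log directive on the line above errors. Reconstructed from the first expression, the injected stanza looks roughly like this (every other directive in the Corefile is left untouched):

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }

and it lands directly above the existing "forward . /etc/resolv.conf" line.
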
	I1206 20:01:06.312099  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.700233834s)
	I1206 20:01:06.312147  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312503  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312519  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312531  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312541  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312895  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312985  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:06.334314  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.334343  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.334719  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.334742  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.677046  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.176051  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.553877678s)
	I1206 20:01:07.176112  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176124  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176520  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176551  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.176570  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176584  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176859  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.176852  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176884  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.287377  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.446934189s)
	I1206 20:01:07.287525  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.287586  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288055  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.288055  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288082  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288096  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.288105  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288358  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288372  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288384  115591 addons.go:467] Verifying addon metrics-server=true in "embed-certs-209025"
	I1206 20:01:07.291120  115591 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:03.100131  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.107571  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.599078  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.292151  115591 addons.go:502] enable addons completed in 2.967619291s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:01:09.122709  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:06.258156  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:06.258193  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:06.321049  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:06.321084  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:06.376243  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:06.376281  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:06.441701  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:06.441742  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:06.493399  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:06.493440  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:06.545681  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:06.545717  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:06.602830  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:06.602864  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:06.618874  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:06.618903  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:06.694329  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:06.694375  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:06.748217  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:06.748255  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:06.933616  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:06.933655  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
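
The cri.go/logs.go entries above repeat one pattern per component: resolve crictl, list matching container IDs with "crictl ps -a --quiet --name=...", then tail each container's log with "crictl logs --tail 400". Below is a minimal local sketch of that loop, assuming crictl is on PATH; it is not minikube's implementation (which runs these commands over SSH on the node), just an illustration of the same shell pattern visible in the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherComponentLogs mirrors the pattern in the log above: list container IDs
    // for a component by name, then tail the logs of the first match.
    func gatherComponentLogs(name string) (string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return "", err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return "", fmt.Errorf("no container was found matching %q", name)
        }
        logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
        return string(logs), err
    }

    func main() {
        // Component names taken from the log above; any other --name filter works the same way.
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            if logs, err := gatherComponentLogs(c); err == nil {
                fmt.Printf("=== %s ===\n%s\n", c, logs)
            }
        }
    }
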
	I1206 20:01:09.511340  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.530228  115078 api_server.go:72] duration metric: took 4m16.464196787s to wait for apiserver process to appear ...
	I1206 20:01:09.530254  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.530295  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:09.530357  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:09.574265  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.574301  115078 cri.go:89] found id: ""
	I1206 20:01:09.574313  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:09.574377  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.579240  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:09.579310  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:09.622512  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.622540  115078 cri.go:89] found id: ""
	I1206 20:01:09.622551  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:09.622619  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.627770  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:09.627847  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:09.675976  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:09.676007  115078 cri.go:89] found id: ""
	I1206 20:01:09.676018  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:09.676082  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.680750  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:09.680824  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:09.721081  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.721108  115078 cri.go:89] found id: ""
	I1206 20:01:09.721119  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:09.721181  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.725501  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:09.725568  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:09.777674  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:09.777700  115078 cri.go:89] found id: ""
	I1206 20:01:09.777709  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:09.777767  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.782475  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:09.782558  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:09.833889  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:09.833916  115078 cri.go:89] found id: ""
	I1206 20:01:09.833926  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:09.833985  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.838897  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:09.838977  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:09.880892  115078 cri.go:89] found id: ""
	I1206 20:01:09.880923  115078 logs.go:284] 0 containers: []
	W1206 20:01:09.880934  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:09.880943  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:09.881011  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:09.924025  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:09.924058  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:09.924065  115078 cri.go:89] found id: ""
	I1206 20:01:09.924075  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:09.924142  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.928667  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.933112  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:09.933134  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:09.949212  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:09.949254  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.996227  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:09.996261  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:10.046607  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:10.046645  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:10.102171  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:10.102214  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:10.160600  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:10.160641  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:10.203673  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:10.203709  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:10.681783  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:10.681824  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:10.813061  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:10.813102  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:10.857895  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:10.857930  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:10.904589  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:10.904625  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:10.957570  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:10.957608  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.624997  115591 pod_ready.go:92] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.625025  115591 pod_ready.go:81] duration metric: took 5.029829059s waiting for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.625038  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632534  115591 pod_ready.go:92] pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.632561  115591 pod_ready.go:81] duration metric: took 7.514952ms waiting for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632574  115591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642077  115591 pod_ready.go:92] pod "etcd-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.642107  115591 pod_ready.go:81] duration metric: took 9.52505ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642121  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648636  115591 pod_ready.go:92] pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.648658  115591 pod_ready.go:81] duration metric: took 6.530394ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648667  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656534  115591 pod_ready.go:92] pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.656561  115591 pod_ready.go:81] duration metric: took 7.887248ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656573  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019281  115591 pod_ready.go:92] pod "kube-proxy-nf2cw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.019310  115591 pod_ready.go:81] duration metric: took 362.727602ms waiting for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019323  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419938  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.419971  115591 pod_ready.go:81] duration metric: took 400.640145ms waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419982  115591 pod_ready.go:38] duration metric: took 5.834689614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
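
Each pod_ready.go entry above polls a single kube-system pod until its PodReady condition reports True or the 6m0s budget runs out. A condensed sketch of that check with client-go follows; the kubeconfig path and pod name are placeholders copied from this run, and the polling interval is illustrative rather than minikube's exact schedule.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True, which is the
    // status the pod_ready.go lines above wait for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: kubeconfig path and pod name are taken from the log purely as examples.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        for start := time.Now(); time.Since(start) < 6*time.Minute; time.Sleep(2 * time.Second) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-57z8q", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
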
	I1206 20:01:10.420000  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:10.420062  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:10.436691  115591 api_server.go:72] duration metric: took 5.973781556s to wait for apiserver process to appear ...
	I1206 20:01:10.436723  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:10.436746  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 20:01:10.442876  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 20:01:10.444774  115591 api_server.go:141] control plane version: v1.28.4
	I1206 20:01:10.444798  115591 api_server.go:131] duration metric: took 8.067787ms to wait for apiserver health ...
	I1206 20:01:10.444808  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:10.624219  115591 system_pods.go:59] 9 kube-system pods found
	I1206 20:01:10.624251  115591 system_pods.go:61] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:10.624256  115591 system_pods.go:61] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:10.624260  115591 system_pods.go:61] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:10.624264  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:10.624268  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:10.624272  115591 system_pods.go:61] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:10.624275  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:10.624282  115591 system_pods.go:61] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.624286  115591 system_pods.go:61] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:10.624296  115591 system_pods.go:74] duration metric: took 179.481721ms to wait for pod list to return data ...
	I1206 20:01:10.624306  115591 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:10.818715  115591 default_sa.go:45] found service account: "default"
	I1206 20:01:10.818741  115591 default_sa.go:55] duration metric: took 194.428895ms for default service account to be created ...
	I1206 20:01:10.818750  115591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:11.022686  115591 system_pods.go:86] 9 kube-system pods found
	I1206 20:01:11.022713  115591 system_pods.go:89] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:11.022718  115591 system_pods.go:89] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:11.022722  115591 system_pods.go:89] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:11.022726  115591 system_pods.go:89] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:11.022730  115591 system_pods.go:89] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:11.022734  115591 system_pods.go:89] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:11.022738  115591 system_pods.go:89] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:11.022744  115591 system_pods.go:89] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.022750  115591 system_pods.go:89] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:11.022762  115591 system_pods.go:126] duration metric: took 204.004835ms to wait for k8s-apps to be running ...
	I1206 20:01:11.022774  115591 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:11.022824  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:11.041212  115591 system_svc.go:56] duration metric: took 18.424469ms WaitForService to wait for kubelet.
	I1206 20:01:11.041256  115591 kubeadm.go:581] duration metric: took 6.578354937s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:11.041291  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:11.219045  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:11.219079  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:11.219094  115591 node_conditions.go:105] duration metric: took 177.793737ms to run NodePressure ...
	I1206 20:01:11.219107  115591 start.go:228] waiting for startup goroutines ...
	I1206 20:01:11.219113  115591 start.go:233] waiting for cluster config update ...
	I1206 20:01:11.219125  115591 start.go:242] writing updated cluster config ...
	I1206 20:01:11.219482  115591 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:11.275863  115591 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:01:11.278074  115591 out.go:177] * Done! kubectl is now configured to use "embed-certs-209025" cluster and "default" namespace by default
	I1206 20:01:09.099590  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.099616  115217 pod_ready.go:81] duration metric: took 8.363590309s waiting for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.099626  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.103452  115217 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103485  115217 pod_ready.go:81] duration metric: took 3.845902ms waiting for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:09.103499  115217 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103507  115217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110700  115217 pod_ready.go:92] pod "kube-proxy-wvqmw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.110721  115217 pod_ready.go:81] duration metric: took 7.207091ms waiting for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110729  115217 pod_ready.go:38] duration metric: took 8.477100108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:09.110744  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:09.110791  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.127244  115217 api_server.go:72] duration metric: took 8.855777965s to wait for apiserver process to appear ...
	I1206 20:01:09.127272  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.127290  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 20:01:09.134411  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 20:01:09.135553  115217 api_server.go:141] control plane version: v1.16.0
	I1206 20:01:09.135578  115217 api_server.go:131] duration metric: took 8.298936ms to wait for apiserver health ...
	I1206 20:01:09.135589  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:09.140145  115217 system_pods.go:59] 4 kube-system pods found
	I1206 20:01:09.140167  115217 system_pods.go:61] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.140172  115217 system_pods.go:61] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.140178  115217 system_pods.go:61] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.140183  115217 system_pods.go:61] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.140191  115217 system_pods.go:74] duration metric: took 4.595695ms to wait for pod list to return data ...
	I1206 20:01:09.140198  115217 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:09.142852  115217 default_sa.go:45] found service account: "default"
	I1206 20:01:09.142877  115217 default_sa.go:55] duration metric: took 2.67139ms for default service account to be created ...
	I1206 20:01:09.142888  115217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:09.145800  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.145822  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.145827  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.145833  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.145838  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.145856  115217 retry.go:31] will retry after 199.361191ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.351430  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.351475  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.351485  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.351497  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.351504  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.351529  115217 retry.go:31] will retry after 239.084983ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.595441  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.595479  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.595487  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.595498  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.595506  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.595528  115217 retry.go:31] will retry after 380.909676ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.982061  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.982088  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.982093  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.982101  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.982115  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.982133  115217 retry.go:31] will retry after 451.472574ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:10.439270  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:10.439303  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:10.439311  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:10.439321  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.439328  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:10.439350  115217 retry.go:31] will retry after 654.845182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.101088  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.101129  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.101137  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.101147  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.101155  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.101178  115217 retry.go:31] will retry after 650.939663ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.757024  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.757053  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.757058  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.757065  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.757070  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.757088  115217 retry.go:31] will retry after 828.555469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:12.591156  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:12.591193  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:12.591209  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:12.591220  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:12.591227  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:12.591254  115217 retry.go:31] will retry after 1.26518336s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
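
The retry.go lines above show the wait loop behind system_pods.go:116: list the kube-system pods, and if control-plane components are still missing, sleep for a progressively longer interval and check again. A small sketch of that shape is below; the growth factor and durations are illustrative only, since minikube's real backoff is jittered.

    package main

    import (
        "fmt"
        "time"
    )

    // waitForComponents polls check() until it reports no missing components or the
    // timeout expires, sleeping a little longer after each failed attempt, in the
    // spirit of the "will retry after ..." lines above.
    func waitForComponents(timeout time.Duration, check func() []string) error {
        backoff := 200 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out, still missing components: %v", missing)
            }
            time.Sleep(backoff)
            backoff = backoff * 3 / 2 // grow ~1.5x per attempt (illustrative, not minikube's exact schedule)
        }
    }

    func main() {
        attempts := 0
        err := waitForComponents(2*time.Second, func() []string {
            attempts++
            if attempts < 4 {
                return []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
            }
            return nil
        })
        fmt.Println(err)
    }
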
	I1206 20:01:11.000472  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:11.000505  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.545345  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 20:01:13.551262  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 20:01:13.553129  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 20:01:13.553161  115078 api_server.go:131] duration metric: took 4.022898619s to wait for apiserver health ...
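
The api_server.go:253 lines above probe the apiserver's /healthz endpoint directly and treat a 200 response with body "ok" as healthy. The following is a self-contained sketch of that probe, not minikube's code: skipping TLS verification is only to keep the example short, whereas the real check authenticates against the cluster CA, and the URL below is just the address seen in this run.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz polls an apiserver /healthz endpoint until it returns 200 "ok"
    // or the timeout expires, mirroring the behaviour visible in the log above.
    func checkHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: certificate verification is skipped to keep the sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil // e.g. "https://192.168.39.5:8443/healthz returned 200: ok"
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver %s did not report healthy within %s", url, timeout)
    }

    func main() {
        if err := checkHealthz("https://192.168.39.5:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
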
	I1206 20:01:13.553173  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:13.553204  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:13.553287  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:13.619861  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:13.619892  115078 cri.go:89] found id: ""
	I1206 20:01:13.619903  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:13.619994  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.625028  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:13.625099  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:13.667275  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:13.667300  115078 cri.go:89] found id: ""
	I1206 20:01:13.667309  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:13.667378  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.671673  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:13.671740  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:13.713319  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.713351  115078 cri.go:89] found id: ""
	I1206 20:01:13.713361  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:13.713428  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.718155  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:13.718219  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:13.758383  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.758414  115078 cri.go:89] found id: ""
	I1206 20:01:13.758424  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:13.758488  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.762747  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:13.762826  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:13.803602  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:13.803627  115078 cri.go:89] found id: ""
	I1206 20:01:13.803635  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:13.803685  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.808083  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:13.808160  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:13.852504  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:13.852531  115078 cri.go:89] found id: ""
	I1206 20:01:13.852539  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:13.852598  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.857213  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:13.857322  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:13.896981  115078 cri.go:89] found id: ""
	I1206 20:01:13.897023  115078 logs.go:284] 0 containers: []
	W1206 20:01:13.897035  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:13.897044  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:13.897110  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:13.940969  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:13.940996  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:13.941004  115078 cri.go:89] found id: ""
	I1206 20:01:13.941013  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:13.941075  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.945508  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.949933  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:13.949961  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.986034  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:13.986065  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:14.045155  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:14.045197  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:14.091205  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:14.091240  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:14.130184  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:14.130221  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:14.176981  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:14.177024  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:14.191755  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:14.191796  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:14.316375  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:14.316413  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:14.359700  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:14.359746  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:14.415906  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:14.415952  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:14.471453  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:14.471496  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:14.520012  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:14.520051  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:14.567445  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:14.567482  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
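	(Editor's note) The pass above gathers per-component logs on the guest with crictl and journalctl. As a hedged sketch, the same collection can be reproduced by hand over SSH to the node; the container ID is a placeholder to be filled in from the first command, and the component name kube-apiserver is only an example:

	    # list container IDs for a component (example component: kube-apiserver)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # tail the last 400 lines of that container's log (ID taken from the command above)
	    sudo crictl logs --tail 400 <container-id>
	    # service-level logs for the kubelet and the CRI-O runtime
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # kernel warnings and errors, matching the dmesg step in the trace
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400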
	I1206 20:01:17.434636  115078 system_pods.go:59] 8 kube-system pods found
	I1206 20:01:17.434671  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.434676  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.434680  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.434685  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.434688  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.434692  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.434700  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.434706  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.434714  115078 system_pods.go:74] duration metric: took 3.881535405s to wait for pod list to return data ...
	I1206 20:01:17.434724  115078 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:17.437744  115078 default_sa.go:45] found service account: "default"
	I1206 20:01:17.437770  115078 default_sa.go:55] duration metric: took 3.038532ms for default service account to be created ...
	I1206 20:01:17.437780  115078 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:17.444539  115078 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:17.444567  115078 system_pods.go:89] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.444572  115078 system_pods.go:89] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.444577  115078 system_pods.go:89] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.444583  115078 system_pods.go:89] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.444587  115078 system_pods.go:89] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.444592  115078 system_pods.go:89] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.444602  115078 system_pods.go:89] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.444608  115078 system_pods.go:89] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.444619  115078 system_pods.go:126] duration metric: took 6.832576ms to wait for k8s-apps to be running ...
	I1206 20:01:17.444629  115078 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:17.444687  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:17.464821  115078 system_svc.go:56] duration metric: took 20.181153ms WaitForService to wait for kubelet.
	I1206 20:01:17.464866  115078 kubeadm.go:581] duration metric: took 4m24.398841426s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:17.464894  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:17.467938  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:17.467964  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:17.467975  115078 node_conditions.go:105] duration metric: took 3.076458ms to run NodePressure ...
	I1206 20:01:17.467988  115078 start.go:228] waiting for startup goroutines ...
	I1206 20:01:17.467994  115078 start.go:233] waiting for cluster config update ...
	I1206 20:01:17.468004  115078 start.go:242] writing updated cluster config ...
	I1206 20:01:17.468290  115078 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:17.523451  115078 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1206 20:01:17.525609  115078 out.go:177] * Done! kubectl is now configured to use "no-preload-989559" cluster and "default" namespace by default
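	(Editor's note) The readiness checks reported just above for the no-preload-989559 profile (kube-system pods, the default service account, the kubelet service, and node conditions) can be spot-checked by hand. This is a hedged sketch using the kubeconfig context name printed in the log, not the exact calls minikube makes, and with the binary path abbreviated to minikube:

	    # 8 kube-system pods were expected; metrics-server may still be Pending
	    kubectl --context no-preload-989559 -n kube-system get pods
	    # the default service account must exist in the default namespace
	    kubectl --context no-preload-989559 get serviceaccount default
	    # the kubelet check in the log runs systemctl is-active on the node
	    minikube -p no-preload-989559 ssh 'sudo systemctl is-active kubelet'
	    # NodePressure verification reads the node conditions
	    kubectl --context no-preload-989559 describe nodes | grep -A8 'Conditions:'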
	I1206 20:01:13.862479  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:13.862506  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:13.862512  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:13.862519  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:13.862523  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:13.862542  115217 retry.go:31] will retry after 1.299046526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:15.166601  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:15.166630  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:15.166635  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:15.166642  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:15.166647  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:15.166667  115217 retry.go:31] will retry after 1.832151574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:17.005707  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:17.005739  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:17.005746  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:17.005754  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.005774  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:17.005797  115217 retry.go:31] will retry after 1.796371959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:18.808729  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:18.808757  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:18.808763  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:18.808770  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:18.808775  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:18.808792  115217 retry.go:31] will retry after 2.814845209s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:21.630762  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:21.630791  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:21.630796  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:21.630811  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:21.630816  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:21.630834  115217 retry.go:31] will retry after 2.866148194s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:24.502168  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:24.502198  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:24.502203  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:24.502211  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:24.502215  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:24.502233  115217 retry.go:31] will retry after 3.777894628s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:28.284776  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:28.284812  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:28.284818  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:28.284825  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:28.284829  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:28.284847  115217 retry.go:31] will retry after 4.837538668s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:33.127301  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:33.127330  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:33.127336  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:33.127344  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:33.127349  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:33.127370  115217 retry.go:31] will retry after 6.833662344s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:39.966417  115217 system_pods.go:86] 5 kube-system pods found
	I1206 20:01:39.966450  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:39.966458  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Pending
	I1206 20:01:39.966465  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:39.966476  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:39.966483  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:39.966504  115217 retry.go:31] will retry after 9.204033337s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:49.176395  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:49.176434  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:49.176442  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Pending
	I1206 20:01:49.176450  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:49.176457  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:49.176462  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:49.176469  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Pending
	I1206 20:01:49.176479  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:49.176487  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:49.176511  115217 retry.go:31] will retry after 9.456016194s: missing components: etcd, kube-scheduler
	I1206 20:01:58.638807  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:58.638837  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:58.638842  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Running
	I1206 20:01:58.638847  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:58.638851  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:58.638855  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:58.638861  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Running
	I1206 20:01:58.638867  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:58.638872  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:58.638879  115217 system_pods.go:126] duration metric: took 49.495986809s to wait for k8s-apps to be running ...
	I1206 20:01:58.638886  115217 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:58.638935  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:58.654683  115217 system_svc.go:56] duration metric: took 15.783018ms WaitForService to wait for kubelet.
	I1206 20:01:58.654715  115217 kubeadm.go:581] duration metric: took 58.383258338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:58.654738  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:58.659189  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:58.659215  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:58.659226  115217 node_conditions.go:105] duration metric: took 4.482979ms to run NodePressure ...
	I1206 20:01:58.659239  115217 start.go:228] waiting for startup goroutines ...
	I1206 20:01:58.659245  115217 start.go:233] waiting for cluster config update ...
	I1206 20:01:58.659255  115217 start.go:242] writing updated cluster config ...
	I1206 20:01:58.659522  115217 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:58.710716  115217 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1206 20:01:58.713372  115217 out.go:177] 
	W1206 20:01:58.714711  115217 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1206 20:01:58.716208  115217 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1206 20:01:58.717734  115217 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-448851" cluster and "default" namespace by default
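	(Editor's note) The retries above wait for the control-plane static pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) to appear and reach Running, backing off between attempts. A rough hand-rolled equivalent, assuming the kubeadm component label on static pods (visible in the CRI-O dump below) and a fixed sleep instead of the growing backoff:

	    for comp in etcd kube-apiserver kube-controller-manager kube-scheduler; do
	      until kubectl --context old-k8s-version-448851 -n kube-system \
	          get pods -l component=$comp --no-headers 2>/dev/null | grep -q Running; do
	        echo "waiting for $comp"; sleep 5
	      done
	    done

	The closing warning about kubectl 1.28.4 against a 1.16.0 cluster is expected for this profile; the suggested 'minikube kubectl -- ...' form runs a client matched to the cluster version.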
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:55:36 UTC, ends at Wed 2023-12-06 20:10:13 UTC. --
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.090666553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2521a0a9-62bd-40f6-89c6-d82b44eadb62 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.092454133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0ffb20c9-f9f0-4998-9747-def5e470ab2e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.092920490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893413092903581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0ffb20c9-f9f0-4998-9747-def5e470ab2e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.093549997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a52b9c41-d13b-4f43-8dc0-617590c6444a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.093623422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a52b9c41-d13b-4f43-8dc0-617590c6444a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.093865304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a52b9c41-d13b-4f43-8dc0-617590c6444a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.123959889Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=227f0df4-1ef9-4b7c-b7be-e24d35fd0616 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.124274706Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892867527342957,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-06T20:01:07.189722224Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f8ba5209dd5ef5434ee470c9de610113e7f91c4a7a69c2d749c62f577625e32b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-5qxxj,Uid:4eaddb4b-aec0-4cc7-b467-bb882bcba8a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892867327202816,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-5qxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eaddb4b-aec0-4cc7-b467-bb882bcba8a
0,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T20:01:06.983123692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-57z8q,Uid:24c81a49-d80e-47df-86d2-0056ccc25858,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892866064452661,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T20:01:04.228284758Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-8lsns,Uid:14c5f16e-0c30-4602
-b772-c6e0c8a577a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892865950693371,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T20:01:04.115906592Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&PodSandboxMetadata{Name:kube-proxy-nf2cw,Uid:5e49b3f8-7eee-4c04-ae22-75ccd216bb27,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892864881347668,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T20:01:03.925334053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-209025,Uid:6bd71b809324a6eca42b4ebc7a97ad34,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892841163108214,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6bd71b809324a6eca42b4ebc7a97ad34,kubernetes.io/config.seen: 2023-12-06T20:00:40.583262138Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&PodSandboxMetadata{Name:kube-apiserver
-embed-certs-209025,Uid:bfacf53695bcf209fb2e55d303df2a45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892841141532623,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.164:8443,kubernetes.io/config.hash: bfacf53695bcf209fb2e55d303df2a45,kubernetes.io/config.seen: 2023-12-06T20:00:40.583260447Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-209025,Uid:03c61231636f1ecaceb5a6fff900bad8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892841116716353,Labels:map[string]string{component: kube-controller-mana
ger,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 03c61231636f1ecaceb5a6fff900bad8,kubernetes.io/config.seen: 2023-12-06T20:00:40.583261455Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-209025,Uid:c5e880c82b42dbac88e3f6043104b285,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892841076794690,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.164:2379,kubernetes.io/config.hash: c5e880c82b42dbac88e3f6043104b285,kubernetes.io/config.seen: 2023-12-06T20:00:40.583257167Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=227f0df4-1ef9-4b7c-b7be-e24d35fd0616 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.125307350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a6d10b25-4b35-4f01-ba4e-28dee2d01128 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.125409726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a6d10b25-4b35-4f01-ba4e-28dee2d01128 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.125597448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a6d10b25-4b35-4f01-ba4e-28dee2d01128 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.143177946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3eaf565e-a936-488c-9696-388a7db091d0 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.143281491Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3eaf565e-a936-488c-9696-388a7db091d0 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.145816529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7f989a66-c264-4558-b154-42c4df552596 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.146376138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893413146357662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7f989a66-c264-4558-b154-42c4df552596 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.147185540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=973409dd-ee12-4895-b6fc-a04779d2f647 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.147263592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=973409dd-ee12-4895-b6fc-a04779d2f647 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.147518034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=973409dd-ee12-4895-b6fc-a04779d2f647 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.188905341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=25d44d21-1e75-4aae-8958-69cc18cc0ef5 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.189010388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=25d44d21-1e75-4aae-8958-69cc18cc0ef5 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.193563902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a514015d-d162-4888-9985-b38a75d295e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.194085811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893413194070917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a514015d-d162-4888-9985-b38a75d295e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.195038238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e84b80ae-adc7-4619-811c-da21f5d1be57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.195109771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e84b80ae-adc7-4619-811c-da21f5d1be57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:13 embed-certs-209025 crio[715]: time="2023-12-06 20:10:13.195303868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e84b80ae-adc7-4619-811c-da21f5d1be57 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad375b57a7bfd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   aa8ddc84680be       storage-provisioner
	ba55c737b2f85       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   21cbb5eeafb68       coredns-5dd5756b68-57z8q
	f038b2fcbbc60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   8ab5d5e9cbd4d       coredns-5dd5756b68-8lsns
	5a3aaa502aefb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   f88e50648869e       kube-proxy-nf2cw
	101928f953b6e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   3f327ecd16f2f       etcd-embed-certs-209025
	279722e047600       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   c7ab5554d8445       kube-scheduler-embed-certs-209025
	fa67f5071999c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   c6c5b9af8927b       kube-apiserver-embed-certs-209025
	845783e64fc22       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   89acf66f80011       kube-controller-manager-embed-certs-209025
	
	* 
	* ==> coredns [ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33554 - 49889 "HINFO IN 9130365448740154584.8350522042857180029. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027820732s
	
	* 
	* ==> coredns [f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42945 - 9398 "HINFO IN 2600546345607168387.6013047244688371649. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029321211s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-209025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-209025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=embed-certs-209025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T20_00_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 20:00:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-209025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 20:10:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:06:17 +0000   Wed, 06 Dec 2023 20:00:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:06:17 +0000   Wed, 06 Dec 2023 20:00:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:06:17 +0000   Wed, 06 Dec 2023 20:00:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:06:17 +0000   Wed, 06 Dec 2023 20:00:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.164
	  Hostname:    embed-certs-209025
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 70c42a0214ba45939561709350295f75
	  System UUID:                70c42a02-14ba-4593-9561-709350295f75
	  Boot ID:                    a907d52f-eaa3-4a92-b99d-6589e3dd4745
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-57z8q                      100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     9m9s
	  kube-system                 coredns-5dd5756b68-8lsns                      100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     9m9s
	  kube-system                 etcd-embed-certs-209025                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         9m22s
	  kube-system                 kube-apiserver-embed-certs-209025             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m25s
	  kube-system                 kube-controller-manager-embed-certs-209025    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m24s
	  kube-system                 kube-proxy-nf2cw                              0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m10s
	  kube-system                 kube-scheduler-embed-certs-209025             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-5qxxj               100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         9m7s
	  kube-system                 storage-provisioner                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%!)(MISSING)   0 (0%!)(MISSING)
	  memory             440Mi (20%!)(MISSING)  340Mi (16%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m33s (x8 over 9m33s)  kubelet          Node embed-certs-209025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m33s (x8 over 9m33s)  kubelet          Node embed-certs-209025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m33s (x7 over 9m33s)  kubelet          Node embed-certs-209025 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node embed-certs-209025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node embed-certs-209025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node embed-certs-209025 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m22s                  kubelet          Node embed-certs-209025 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m22s                  kubelet          Node embed-certs-209025 status is now: NodeReady
	  Normal  RegisteredNode           9m10s                  node-controller  Node embed-certs-209025 event: Registered Node embed-certs-209025 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070173] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.714144] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.561472] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154930] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.444118] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.177474] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.122000] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.157167] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.130994] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.243403] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Dec 6 19:56] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[ +18.993905] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 6 20:00] systemd-fstab-generator[3504]: Ignoring "noauto" for root device
	[ +10.323945] systemd-fstab-generator[3828]: Ignoring "noauto" for root device
	[Dec 6 20:01] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5] <==
	* {"level":"info","ts":"2023-12-06T20:00:44.509239Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-06T20:00:44.509607Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"80a63a57d726c697","initial-advertise-peer-urls":["https://192.168.50.164:2380"],"listen-peer-urls":["https://192.168.50.164:2380"],"advertise-client-urls":["https://192.168.50.164:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.164:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-06T20:00:44.513824Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-06T20:00:44.51387Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.164:2380"}
	{"level":"info","ts":"2023-12-06T20:00:44.515024Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.164:2380"}
	{"level":"info","ts":"2023-12-06T20:00:44.514653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 switched to configuration voters=(9270161031934953111)"}
	{"level":"info","ts":"2023-12-06T20:00:44.515481Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d41e51b80202c3fb","local-member-id":"80a63a57d726c697","added-peer-id":"80a63a57d726c697","added-peer-peer-urls":["https://192.168.50.164:2380"]}
	{"level":"info","ts":"2023-12-06T20:00:44.74455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:44.744634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:44.744673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgPreVoteResp from 80a63a57d726c697 at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:44.744691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became candidate at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.744701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgVoteResp from 80a63a57d726c697 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.744712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became leader at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.744723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 80a63a57d726c697 elected leader 80a63a57d726c697 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.746881Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.748083Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"80a63a57d726c697","local-member-attributes":"{Name:embed-certs-209025 ClientURLs:[https://192.168.50.164:2379]}","request-path":"/0/members/80a63a57d726c697/attributes","cluster-id":"d41e51b80202c3fb","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T20:00:44.74816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:44.749635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T20:00:44.750227Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:44.760539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.164:2379"}
	{"level":"info","ts":"2023-12-06T20:00:44.765357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d41e51b80202c3fb","local-member-id":"80a63a57d726c697","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.765992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.766131Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.780125Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T20:00:44.780314Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:10:13 up 14 min,  0 users,  load average: 0.33, 0.25, 0.22
	Linux embed-certs-209025 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2] <==
	* W1206 20:05:47.803899       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:05:47.804003       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:05:47.804012       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:05:47.803914       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:05:47.804107       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:05:47.805383       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:06:46.658117       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:06:47.804616       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:06:47.804707       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:06:47.804720       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:06:47.805987       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:06:47.806080       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:06:47.806089       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:07:46.658312       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1206 20:08:46.658683       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:08:47.806045       1 handler_proxy.go:93] no RequestInfo found in the context
	W1206 20:08:47.806197       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:08:47.806245       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:08:47.806275       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1206 20:08:47.806466       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:08:47.808196       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:09:46.658081       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e] <==
	* I1206 20:04:34.527115       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:05:04.153253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:05:04.544145       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:05:34.160101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:05:34.556054       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:06:04.166523       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:06:04.564953       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:06:34.173019       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:06:34.578184       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:07:02.149343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="589.195µs"
	E1206 20:07:04.183520       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:07:04.586635       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:07:17.147875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="353.351µs"
	E1206 20:07:34.190899       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:07:34.596194       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:08:04.197305       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:08:04.605641       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:08:34.203389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:08:34.617649       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:09:04.211736       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:09:04.627624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:09:34.217971       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:09:34.641083       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:10:04.225024       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:10:04.652282       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319] <==
	* I1206 20:01:08.687741       1 server_others.go:69] "Using iptables proxy"
	I1206 20:01:08.735459       1 node.go:141] Successfully retrieved node IP: 192.168.50.164
	I1206 20:01:08.881600       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1206 20:01:08.881670       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 20:01:08.886354       1 server_others.go:152] "Using iptables Proxier"
	I1206 20:01:08.887812       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 20:01:08.888181       1 server.go:846] "Version info" version="v1.28.4"
	I1206 20:01:08.888411       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 20:01:08.890702       1 config.go:188] "Starting service config controller"
	I1206 20:01:08.898414       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 20:01:08.891889       1 config.go:315] "Starting node config controller"
	I1206 20:01:08.908276       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 20:01:08.909271       1 config.go:97] "Starting endpoint slice config controller"
	I1206 20:01:08.909392       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 20:01:08.998889       1 shared_informer.go:318] Caches are synced for service config
	I1206 20:01:09.008830       1 shared_informer.go:318] Caches are synced for node config
	I1206 20:01:09.009992       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082] <==
	* W1206 20:00:47.712284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:47.712374       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:47.713689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 20:00:47.713832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1206 20:00:47.740150       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 20:00:47.740204       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 20:00:47.767055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:47.767112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:47.804591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 20:00:47.804658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 20:00:47.819992       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:47.820115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1206 20:00:47.867850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:47.867971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:47.903081       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 20:00:47.903169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 20:00:47.937303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:47.937422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1206 20:00:47.981123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:47.981247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 20:00:48.136564       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:48.136633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:48.143320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:48.143383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1206 20:00:50.119029       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:55:36 UTC, ends at Wed 2023-12-06 20:10:13 UTC. --
	Dec 06 20:07:29 embed-certs-209025 kubelet[3835]: E1206 20:07:29.124956    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:07:43 embed-certs-209025 kubelet[3835]: E1206 20:07:43.124323    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:07:51 embed-certs-209025 kubelet[3835]: E1206 20:07:51.230282    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:07:51 embed-certs-209025 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:07:51 embed-certs-209025 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:07:51 embed-certs-209025 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:07:54 embed-certs-209025 kubelet[3835]: E1206 20:07:54.124725    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:08:08 embed-certs-209025 kubelet[3835]: E1206 20:08:08.124097    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:08:22 embed-certs-209025 kubelet[3835]: E1206 20:08:22.124438    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:08:34 embed-certs-209025 kubelet[3835]: E1206 20:08:34.124048    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:08:49 embed-certs-209025 kubelet[3835]: E1206 20:08:49.124369    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:08:51 embed-certs-209025 kubelet[3835]: E1206 20:08:51.231092    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:08:51 embed-certs-209025 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:08:51 embed-certs-209025 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:08:51 embed-certs-209025 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:09:02 embed-certs-209025 kubelet[3835]: E1206 20:09:02.124069    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:09:16 embed-certs-209025 kubelet[3835]: E1206 20:09:16.123977    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:09:31 embed-certs-209025 kubelet[3835]: E1206 20:09:31.124703    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:09:43 embed-certs-209025 kubelet[3835]: E1206 20:09:43.124466    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:09:51 embed-certs-209025 kubelet[3835]: E1206 20:09:51.232513    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:09:51 embed-certs-209025 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:09:51 embed-certs-209025 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:09:51 embed-certs-209025 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:09:56 embed-certs-209025 kubelet[3835]: E1206 20:09:56.125172    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:10:08 embed-certs-209025 kubelet[3835]: E1206 20:10:08.123940    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	
	* 
	* ==> storage-provisioner [ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963] <==
	* I1206 20:01:09.012743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 20:01:09.026150       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 20:01:09.026245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 20:01:09.040993       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 20:01:09.041535       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-209025_2f34e4c0-aa5e-4e2f-8fc1-c4caadcf7692!
	I1206 20:01:09.046204       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44c60c63-e4a2-4de1-b8dd-99775d6e768d", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-209025_2f34e4c0-aa5e-4e2f-8fc1-c4caadcf7692 became leader
	I1206 20:01:09.143178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-209025_2f34e4c0-aa5e-4e2f-8fc1-c4caadcf7692!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-209025 -n embed-certs-209025
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-209025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-5qxxj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-209025 describe pod metrics-server-57f55c9bc5-5qxxj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-209025 describe pod metrics-server-57f55c9bc5-5qxxj: exit status 1 (71.094488ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-5qxxj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-209025 describe pod metrics-server-57f55c9bc5-5qxxj: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.37s)
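For reference, the post-mortem pod check that helpers_test.go runs above can be repeated by hand against the same profile. This is a minimal sketch, assuming the embed-certs-209025 kube context captured in this log is still available on the host (the describe may return NotFound once the pod has been cleaned up, as it did above):

	# list pods not in the Running phase, mirroring helpers_test.go:261
	kubectl --context embed-certs-209025 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running
	# describe the reported pod; the node listing above places it in kube-system
	kubectl --context embed-certs-209025 -n kube-system describe pod metrics-server-57f55c9bc5-5qxxj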

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1206 20:01:27.860030   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-989559 -n no-preload-989559
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:10:18.131484223 +0000 UTC m=+5392.215978030
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
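For reference, the condition this test polls for can be checked by hand against the same profile; a minimal sketch (not part of the test harness), assuming the dashboard addon deploys its pod into the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard label used above:

	kubectl --context no-preload-989559 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s

The 540s timeout mirrors the 9m0s wait in start_stop_delete_test.go:274.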
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-989559 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-989559 logs -n 25: (1.720390315s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-459609 sudo cat                              | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo find                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo crio                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:50:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:50:49.512923  115591 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:50:49.513070  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513079  115591 out.go:309] Setting ErrFile to fd 2...
	I1206 19:50:49.513084  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513305  115591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:50:49.513900  115591 out.go:303] Setting JSON to false
	I1206 19:50:49.514822  115591 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9200,"bootTime":1701883050,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:50:49.514886  115591 start.go:138] virtualization: kvm guest
	I1206 19:50:49.517831  115591 out.go:177] * [embed-certs-209025] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:50:49.519496  115591 notify.go:220] Checking for updates...
	I1206 19:50:49.519507  115591 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:50:49.521356  115591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:50:49.523241  115591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:50:49.525016  115591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:50:49.526632  115591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:50:49.528148  115591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:50:49.530159  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:50:49.530586  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.530636  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.545128  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I1206 19:50:49.545881  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.547345  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.547375  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.547739  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.547926  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.548144  115591 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:50:49.548458  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.548506  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.562767  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I1206 19:50:49.563139  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.563567  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.563588  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.563913  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.564112  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.600267  115591 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:50:49.601977  115591 start.go:298] selected driver: kvm2
	I1206 19:50:49.601996  115591 start.go:902] validating driver "kvm2" against &{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.602089  115591 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:50:49.602812  115591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.602891  115591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:50:49.617831  115591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:50:49.618234  115591 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:50:49.618296  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:50:49.618306  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:50:49.618316  115591 start_flags.go:323] config:
	{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.618468  115591 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.620428  115591 out.go:177] * Starting control plane node embed-certs-209025 in cluster embed-certs-209025
	I1206 19:50:46.558601  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:46.558636  115497 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:46.558644  115497 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:46.558714  115497 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:46.558724  115497 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:46.558837  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:50:46.559024  115497 start.go:365] acquiring machines lock for default-k8s-diff-port-380424: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:49.622242  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:49.622298  115591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:49.622320  115591 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:49.622419  115591 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:49.622431  115591 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:49.622525  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:50:49.622798  115591 start.go:365] acquiring machines lock for embed-certs-209025: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:51.693503  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:50:54.765519  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:00.845535  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:03.917509  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:09.997591  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:13.069427  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:19.149482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:22.221565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:28.301531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:31.373569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:37.453523  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:40.525531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:46.605538  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:49.677544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:55.757544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:58.829552  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:04.909569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:07.981555  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:14.061549  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:17.133576  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:23.213558  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:26.285482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:32.365550  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:35.437574  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:41.517473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:44.589458  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:50.669534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:53.741496  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:59.821528  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:02.893489  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:08.973534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:12.045527  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:18.125473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:21.197472  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:27.277533  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:30.349580  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:36.429514  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:39.501584  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:45.581524  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:48.653547  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:54.733543  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:57.805491  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:03.885571  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:06.957565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:13.037470  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:16.109461  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:22.189477  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:25.261563  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:31.341534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:34.413513  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:40.493530  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:43.497878  115217 start.go:369] acquired machines lock for "old-k8s-version-448851" in 4m25.369261381s
	I1206 19:54:43.497937  115217 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:54:43.497949  115217 fix.go:54] fixHost starting: 
	I1206 19:54:43.498301  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:54:43.498331  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:54:43.513072  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I1206 19:54:43.513520  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:54:43.514005  115217 main.go:141] libmachine: Using API Version  1
	I1206 19:54:43.514035  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:54:43.514375  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:54:43.514571  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:54:43.514716  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 19:54:43.516245  115217 fix.go:102] recreateIfNeeded on old-k8s-version-448851: state=Stopped err=<nil>
	I1206 19:54:43.516266  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	W1206 19:54:43.516391  115217 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:54:43.518413  115217 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-448851" ...
	I1206 19:54:43.495395  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:54:43.495445  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:54:43.497720  115078 machine.go:91] provisioned docker machine in 4m37.37101565s
	I1206 19:54:43.497766  115078 fix.go:56] fixHost completed within 4m37.395231745s
	I1206 19:54:43.497773  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 4m37.395253694s
	W1206 19:54:43.497813  115078 start.go:694] error starting host: provision: host is not running
	W1206 19:54:43.497949  115078 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1206 19:54:43.497960  115078 start.go:709] Will try again in 5 seconds ...
	I1206 19:54:43.519752  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Start
	I1206 19:54:43.519905  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring networks are active...
	I1206 19:54:43.520785  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network default is active
	I1206 19:54:43.521155  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network mk-old-k8s-version-448851 is active
	I1206 19:54:43.521477  115217 main.go:141] libmachine: (old-k8s-version-448851) Getting domain xml...
	I1206 19:54:43.522123  115217 main.go:141] libmachine: (old-k8s-version-448851) Creating domain...
	I1206 19:54:44.758967  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting to get IP...
	I1206 19:54:44.759812  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:44.760194  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:44.760255  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:44.760156  116186 retry.go:31] will retry after 298.997725ms: waiting for machine to come up
	I1206 19:54:45.061071  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.061521  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.061545  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.061474  116186 retry.go:31] will retry after 338.263286ms: waiting for machine to come up
	I1206 19:54:45.401161  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.401614  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.401641  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.401572  116186 retry.go:31] will retry after 468.987471ms: waiting for machine to come up
	I1206 19:54:45.872203  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.872644  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.872675  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.872586  116186 retry.go:31] will retry after 447.252306ms: waiting for machine to come up
	I1206 19:54:46.321277  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.321583  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.321609  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.321549  116186 retry.go:31] will retry after 591.206607ms: waiting for machine to come up
	I1206 19:54:46.913936  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.914351  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.914412  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.914260  116186 retry.go:31] will retry after 888.979547ms: waiting for machine to come up
	I1206 19:54:47.805332  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:47.805783  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:47.805814  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:47.805722  116186 retry.go:31] will retry after 1.088490978s: waiting for machine to come up
	I1206 19:54:48.499603  115078 start.go:365] acquiring machines lock for no-preload-989559: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:54:48.895892  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:48.896316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:48.896347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:48.896249  116186 retry.go:31] will retry after 1.145932913s: waiting for machine to come up
	I1206 19:54:50.043740  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:50.044169  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:50.044199  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:50.044136  116186 retry.go:31] will retry after 1.302468984s: waiting for machine to come up
	I1206 19:54:51.347696  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:51.348093  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:51.348124  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:51.348039  116186 retry.go:31] will retry after 2.099836852s: waiting for machine to come up
	I1206 19:54:53.450166  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:53.450638  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:53.450678  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:53.450566  116186 retry.go:31] will retry after 1.877757048s: waiting for machine to come up
	I1206 19:54:55.331257  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:55.331697  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:55.331752  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:55.331671  116186 retry.go:31] will retry after 3.399849348s: waiting for machine to come up
	I1206 19:54:58.733325  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:58.733712  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:58.733736  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:58.733664  116186 retry.go:31] will retry after 4.308323214s: waiting for machine to come up
	I1206 19:55:04.350333  115497 start.go:369] acquired machines lock for "default-k8s-diff-port-380424" in 4m17.791271724s
	I1206 19:55:04.350411  115497 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:04.350426  115497 fix.go:54] fixHost starting: 
	I1206 19:55:04.350878  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:04.350927  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:04.367462  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I1206 19:55:04.367935  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:04.368546  115497 main.go:141] libmachine: Using API Version  1
	I1206 19:55:04.368580  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:04.368972  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:04.369197  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:04.369417  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 19:55:04.370940  115497 fix.go:102] recreateIfNeeded on default-k8s-diff-port-380424: state=Stopped err=<nil>
	I1206 19:55:04.370982  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	W1206 19:55:04.371135  115497 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:04.373809  115497 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-380424" ...
	I1206 19:55:03.047055  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047484  115217 main.go:141] libmachine: (old-k8s-version-448851) Found IP for machine: 192.168.61.33
	I1206 19:55:03.047516  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has current primary IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047527  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserving static IP address...
	I1206 19:55:03.048083  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.048116  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | skip adding static IP to network mk-old-k8s-version-448851 - found existing host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"}
	I1206 19:55:03.048135  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserved static IP address: 192.168.61.33
	I1206 19:55:03.048146  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting for SSH to be available...
	I1206 19:55:03.048158  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Getting to WaitForSSH function...
	I1206 19:55:03.050347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.050682  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050793  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH client type: external
	I1206 19:55:03.050872  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa (-rw-------)
	I1206 19:55:03.050913  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:03.050935  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | About to run SSH command:
	I1206 19:55:03.050956  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | exit 0
	I1206 19:55:03.137326  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:03.137753  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetConfigRaw
	I1206 19:55:03.138415  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.140903  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141314  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.141341  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141671  115217 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/config.json ...
	I1206 19:55:03.141899  115217 machine.go:88] provisioning docker machine ...
	I1206 19:55:03.141924  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:03.142133  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142284  115217 buildroot.go:166] provisioning hostname "old-k8s-version-448851"
	I1206 19:55:03.142305  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142511  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.144778  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145119  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.145144  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145289  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.145451  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145582  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145705  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.145829  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.146319  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.146343  115217 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448851 && echo "old-k8s-version-448851" | sudo tee /etc/hostname
	I1206 19:55:03.270447  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448851
	
	I1206 19:55:03.270490  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.273453  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273769  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.273802  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273957  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.274148  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274326  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274426  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.274576  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.274893  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.274910  115217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448851/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:03.395200  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:03.395232  115217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:03.395281  115217 buildroot.go:174] setting up certificates
	I1206 19:55:03.395298  115217 provision.go:83] configureAuth start
	I1206 19:55:03.395320  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.395585  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.397989  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398373  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.398405  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398547  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.400869  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401196  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.401223  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401369  115217 provision.go:138] copyHostCerts
	I1206 19:55:03.401492  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:03.401513  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:03.401600  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:03.401718  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:03.401730  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:03.401778  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:03.401857  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:03.401867  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:03.401899  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:03.401971  115217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448851 san=[192.168.61.33 192.168.61.33 localhost 127.0.0.1 minikube old-k8s-version-448851]
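The server certificate generated above carries the machine IP, loopback addresses, and profile name as SANs. A minimal sketch of issuing such a SAN-bearing certificate with the Go standard library; the 2048-bit key, three-year validity, and self-signing (rather than signing with ca-key.pem as the provision step does) are illustrative assumptions:

	// sancert.go: sketch of a server certificate whose SANs match the
	// "san=[...]" list logged above. Self-signed for brevity.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
	
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-448851"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the list logged above.
			IPAddresses: []net.IP{net.ParseIP("192.168.61.33"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-448851"},
		}
	
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
	
		certOut, _ := os.Create("server.pem")
		defer certOut.Close()
		pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	
		keyOut, _ := os.Create("server-key.pem")
		defer keyOut.Close()
		pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}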
	I1206 19:55:03.655010  115217 provision.go:172] copyRemoteCerts
	I1206 19:55:03.655082  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:03.655110  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.657860  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658301  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.658336  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658529  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.658738  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.658914  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.659068  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:03.742021  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:03.765284  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 19:55:03.788562  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:03.811692  115217 provision.go:86] duration metric: configureAuth took 416.376347ms
	I1206 19:55:03.811722  115217 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:03.811943  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 19:55:03.812058  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.814518  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.814898  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.814934  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.815149  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.815371  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815541  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.815787  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.816094  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.816121  115217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:04.115752  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:04.115780  115217 machine.go:91] provisioned docker machine in 973.864642ms
	I1206 19:55:04.115790  115217 start.go:300] post-start starting for "old-k8s-version-448851" (driver="kvm2")
	I1206 19:55:04.115802  115217 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:04.115825  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.116197  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:04.116226  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.119234  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119559  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.119586  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119801  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.120047  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.120228  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.120391  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.203195  115217 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:04.207210  115217 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:04.207238  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:04.207315  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:04.207392  115217 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:04.207475  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:04.215469  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:04.238407  115217 start.go:303] post-start completed in 122.598676ms
	I1206 19:55:04.238437  115217 fix.go:56] fixHost completed within 20.740486511s
	I1206 19:55:04.238467  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.241147  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241522  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.241558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241720  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.241992  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242187  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242346  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.242488  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:04.242801  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:04.242813  115217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:04.350154  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892504.298339573
	
	I1206 19:55:04.350177  115217 fix.go:206] guest clock: 1701892504.298339573
	I1206 19:55:04.350185  115217 fix.go:219] Guest: 2023-12-06 19:55:04.298339573 +0000 UTC Remote: 2023-12-06 19:55:04.238442081 +0000 UTC m=+286.264851054 (delta=59.897492ms)
	I1206 19:55:04.350206  115217 fix.go:190] guest clock delta is within tolerance: 59.897492ms
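The clock check above compares the guest's `date +%s.%N` output against the host's wall clock and accepts the machine when the delta is small. A short sketch of that comparison; the 2s tolerance is an assumed value, not the one minikube actually applies:

	// clockdelta.go: sketch of the guest-vs-host clock comparison logged above.
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		guest, err := parseGuestClock("1701892504.298339573") // value from the log above
		if err != nil {
			panic(err)
		}
		host := time.Now()
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold for illustration
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
	}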
	I1206 19:55:04.350212  115217 start.go:83] releasing machines lock for "old-k8s-version-448851", held for 20.852295937s
	I1206 19:55:04.350240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.350562  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:04.353070  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353519  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.353547  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353732  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354331  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354552  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354641  115217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:04.354689  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.354815  115217 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:04.354844  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.357316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357703  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.357734  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357841  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358006  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.358031  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358052  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.358161  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358241  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358322  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358448  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.358575  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358734  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.469402  115217 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:04.475231  115217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:04.618312  115217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:04.625482  115217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:04.625557  115217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:04.640251  115217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:04.640281  115217 start.go:475] detecting cgroup driver to use...
	I1206 19:55:04.640368  115217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:04.654153  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:04.666295  115217 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:04.666387  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:04.678579  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:04.692472  115217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:04.793090  115217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:04.909331  115217 docker.go:219] disabling docker service ...
	I1206 19:55:04.909399  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:04.922479  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:04.934122  115217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:05.048844  115217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:05.156415  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:05.172525  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:05.190303  115217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1206 19:55:05.190363  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.199967  115217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:05.200048  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.209725  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.218770  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
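Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly these settings before the daemon is restarted (reconstructed from the commands; other keys in the file are untouched):

	pause_image = "registry.k8s.io/pause:3.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"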
	I1206 19:55:05.227835  115217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:05.237006  115217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:05.244839  115217 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:05.244899  115217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:05.256528  115217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:05.266360  115217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:05.387203  115217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:05.555553  115217 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:05.555668  115217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:05.564619  115217 start.go:543] Will wait 60s for crictl version
	I1206 19:55:05.564682  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:05.568979  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:05.611883  115217 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:05.611986  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.666757  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.725942  115217 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1206 19:55:04.375626  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Start
	I1206 19:55:04.375819  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring networks are active...
	I1206 19:55:04.376548  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network default is active
	I1206 19:55:04.376923  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network mk-default-k8s-diff-port-380424 is active
	I1206 19:55:04.377416  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Getting domain xml...
	I1206 19:55:04.378003  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Creating domain...
	I1206 19:55:05.667493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting to get IP...
	I1206 19:55:05.668629  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669112  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669148  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.669064  116315 retry.go:31] will retry after 259.414087ms: waiting for machine to come up
	I1206 19:55:05.930773  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931232  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.931129  116315 retry.go:31] will retry after 319.702286ms: waiting for machine to come up
	I1206 19:55:06.252911  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253423  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253458  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.253363  116315 retry.go:31] will retry after 403.286071ms: waiting for machine to come up
	I1206 19:55:05.727444  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:05.730503  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.730864  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:05.730900  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.731151  115217 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:05.735826  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:05.748254  115217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 19:55:05.748312  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:05.799380  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:05.799468  115217 ssh_runner.go:195] Run: which lz4
	I1206 19:55:05.803715  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:05.808059  115217 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:05.808093  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1206 19:55:07.624367  115217 crio.go:444] Took 1.820689 seconds to copy over tarball
	I1206 19:55:07.624452  115217 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:06.658075  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.658710  116315 retry.go:31] will retry after 572.663186ms: waiting for machine to come up
	I1206 19:55:07.233562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233898  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233927  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.233861  116315 retry.go:31] will retry after 762.563485ms: waiting for machine to come up
	I1206 19:55:07.997980  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998453  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.998368  116315 retry.go:31] will retry after 885.694635ms: waiting for machine to come up
	I1206 19:55:08.885521  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885983  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:08.885918  116315 retry.go:31] will retry after 924.594214ms: waiting for machine to come up
	I1206 19:55:09.812796  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813271  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813305  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:09.813205  116315 retry.go:31] will retry after 1.485258028s: waiting for machine to come up
	I1206 19:55:11.300830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301385  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:11.301315  116315 retry.go:31] will retry after 1.232055429s: waiting for machine to come up
	I1206 19:55:10.452537  115217 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.828052972s)
	I1206 19:55:10.452565  115217 crio.go:451] Took 2.828166 seconds to extract the tarball
	I1206 19:55:10.452574  115217 ssh_runner.go:146] rm: /preloaded.tar.lz4
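The preload step above checks whether /preloaded.tar.lz4 already exists on the node, copies it over when it does not, and unpacks it with an lz4-aware tar while timing the operation. A rough local sketch of the extract-and-time part (the scp plumbing is omitted, and running it requires tar and lz4 on the host):

	// preload.go: sketch of the preload-extraction step. Checks for the
	// tarball, extracts it with the same tar invocation shown in the log,
	// and reports the elapsed time.
	package main
	
	import (
		"log"
		"os"
		"os/exec"
		"time"
	)
	
	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			log.Fatalf("preload tarball missing: %v (the real flow would scp it here)", err)
		}
		start := time.Now()
		// Same extraction command as in the log: tar with an lz4 decompressor.
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract: %v\n%s", err, out)
		}
		log.Printf("Took %.6f seconds to extract the tarball", time.Since(start).Seconds())
	}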
	I1206 19:55:10.493620  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:10.539181  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:10.539218  115217 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:55:10.539312  115217 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.539318  115217 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.539358  115217 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.539364  115217 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.539515  115217 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.539529  115217 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.539331  115217 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.539572  115217 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.540888  115217 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.540931  115217 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.540936  115217 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540880  115217 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.725027  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1206 19:55:10.762761  115217 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1206 19:55:10.762810  115217 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1206 19:55:10.762862  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.763731  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.766312  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.768181  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1206 19:55:10.773115  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.829543  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.841186  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.856309  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.873212  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.983390  115217 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1206 19:55:10.983444  115217 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.983463  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1206 19:55:10.983498  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983510  115217 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1206 19:55:10.983530  115217 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.983564  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1206 19:55:10.983628  115217 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1206 19:55:10.983663  115217 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.983672  115217 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1206 19:55:10.983700  115217 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.983712  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983567  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983730  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983802  115217 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1206 19:55:10.983829  115217 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.983861  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009102  115217 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1206 19:55:11.009135  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:11.009152  115217 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.009211  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009254  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1206 19:55:11.009273  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:11.009307  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:11.009342  115217 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1206 19:55:11.009355  115217 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009388  115217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009390  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:11.130238  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1206 19:55:11.158336  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1206 19:55:11.158375  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1206 19:55:11.158431  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.158438  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1206 19:55:11.158507  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1206 19:55:12.535831  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536331  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:12.536253  116315 retry.go:31] will retry after 1.865303927s: waiting for machine to come up
	I1206 19:55:14.402935  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403326  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403354  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:14.403268  116315 retry.go:31] will retry after 1.960994282s: waiting for machine to come up
	I1206 19:55:16.366289  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366792  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:16.366689  116315 retry.go:31] will retry after 2.933451161s: waiting for machine to come up
	I1206 19:55:13.478881  115217 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0: (2.320421557s)
	I1206 19:55:13.478937  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1206 19:55:13.478892  115217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.469478111s)
	I1206 19:55:13.478983  115217 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1206 19:55:13.479043  115217 cache_images.go:92] LoadImages completed in 2.939808867s
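The image-loading block above probes each required image with `podman image inspect`, removes stale tags with `crictl rmi`, and loads replacements from the local cache with `podman load`. A condensed sketch of that check-remove-load loop using the same CLI calls; note the real check compares the image ID against an expected digest, whereas this sketch only tests for presence:

	// imageload.go: sketch of the "needs transfer" image loop logged above.
	// Requires podman and crictl on the host, so treat it as an illustration
	// rather than a drop-in tool.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func ensureImage(image, cachedTarball string) error {
		// Does the runtime already have the image?
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil // present, nothing to do
		}
		// Remove any stale tag, ignoring "not found" errors.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		// Load the cached copy.
		out, err := exec.Command("sudo", "podman", "load", "-i", cachedTarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", cachedTarball, err, out)
		}
		return nil
	}
	
	func main() {
		// Image and cache path taken from the log above.
		if err := ensureImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println(err)
		}
	}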
	W1206 19:55:13.479149  115217 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1206 19:55:13.479228  115217 ssh_runner.go:195] Run: crio config
	I1206 19:55:13.543270  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:13.543302  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:13.543328  115217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:13.543355  115217 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448851 NodeName:old-k8s-version-448851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 19:55:13.543557  115217 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-448851"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-448851
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.33:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:13.543700  115217 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-448851 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:13.543776  115217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1206 19:55:13.554524  115217 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:13.554611  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:13.566752  115217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1206 19:55:13.586027  115217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:13.603800  115217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1206 19:55:13.627098  115217 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:13.632470  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:13.651452  115217 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851 for IP: 192.168.61.33
	I1206 19:55:13.651507  115217 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:13.651670  115217 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:13.651748  115217 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:13.651860  115217 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.key
	I1206 19:55:13.651932  115217 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key.efa8c2ad
	I1206 19:55:13.651994  115217 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key
	I1206 19:55:13.652142  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:13.652183  115217 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:13.652201  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:13.652241  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:13.652283  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:13.652326  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:13.652389  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:13.653344  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:13.687786  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:13.723604  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:13.756434  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:13.789066  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:13.821087  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:13.849840  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:13.876520  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:13.901763  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:13.932106  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:13.961708  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:13.991586  115217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:14.009848  115217 ssh_runner.go:195] Run: openssl version
	I1206 19:55:14.017661  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:14.031103  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037142  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037212  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.044737  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:14.058296  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:14.068591  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.073995  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.074067  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.079922  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:14.090541  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:14.100915  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106692  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106766  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.112592  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
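The `openssl x509 -hash` / `ln -fs` pairs above install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming (e.g. b5213941.0 for minikubeCA.pem, 51391683.0 for 70834.pem). A minimal Go sketch of that hash-then-symlink step, shelling out to openssl the way ssh_runner does; the helper name and local execution (rather than over SSH, as root) are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the "openssl x509 -hash -noout" + "ln -fs" pair from the
// log: it computes the subject hash of a PEM certificate and links it into
// /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can find it.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: drop any stale link first so the operation is idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}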
	I1206 19:55:14.122630  115217 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:14.128544  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:14.136649  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:14.143060  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:14.151002  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:14.157202  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:14.163456  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
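Each "-checkend 86400" run above asks openssl whether the certificate expires within the next 24 hours (exit status 0 means it is still valid for at least that long). The same question can be answered natively with crypto/x509; a hedged sketch, not minikube's code, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the check "openssl x509 -checkend 86400" expresses through its exit code.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}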
	I1206 19:55:14.171607  115217 kubeadm.go:404] StartCluster: {Name:old-k8s-version-448851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:14.171720  115217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:14.171771  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:14.216630  115217 cri.go:89] found id: ""
	I1206 19:55:14.216712  115217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:14.229800  115217 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:14.229832  115217 kubeadm.go:636] restartCluster start
	I1206 19:55:14.229889  115217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:14.242347  115217 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.243973  115217 kubeconfig.go:92] found "old-k8s-version-448851" server: "https://192.168.61.33:8443"
	I1206 19:55:14.247781  115217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:14.257060  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.257118  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.268619  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.268644  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.268692  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.279803  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.780509  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.780603  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.796116  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.280797  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.280910  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.296260  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.779895  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.780023  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.796115  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.280792  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.280884  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.297258  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.780884  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.781007  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.796430  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.279982  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.280088  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.291102  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.780721  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.780865  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.792253  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
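The repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above (and continuing below) are a poll: roughly every 500ms the tool runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH and treats a non-zero exit as "not up yet". A rough, hypothetical sketch of such a poll loop, run locally rather than through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process shows up or
// the timeout elapses, mirroring the ~500ms cadence visible in the log.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 only when a process matched
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("apiserver pid: ", pid)
}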
	I1206 19:55:19.302288  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302717  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302744  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:19.302670  116315 retry.go:31] will retry after 3.226665023s: waiting for machine to come up
	I1206 19:55:18.280684  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.280777  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.292535  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:18.780650  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.780722  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.793872  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.280431  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.280507  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.292188  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.780793  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.780914  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.791873  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.280527  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.280637  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.291886  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.780810  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.780890  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.791837  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.280389  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.280479  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.291743  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.780252  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.780343  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.791452  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.280013  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.280120  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.291240  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.780451  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.780528  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.791668  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.690245  115591 start.go:369] acquired machines lock for "embed-certs-209025" in 4m34.06740814s
	I1206 19:55:23.690318  115591 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:23.690327  115591 fix.go:54] fixHost starting: 
	I1206 19:55:23.690686  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:23.690728  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:23.706483  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I1206 19:55:23.706891  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:23.707367  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:55:23.707391  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:23.707744  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:23.707925  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:23.708059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 19:55:23.709586  115591 fix.go:102] recreateIfNeeded on embed-certs-209025: state=Stopped err=<nil>
	I1206 19:55:23.709612  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	W1206 19:55:23.709803  115591 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:23.712015  115591 out.go:177] * Restarting existing kvm2 VM for "embed-certs-209025" ...
	I1206 19:55:23.713472  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Start
	I1206 19:55:23.713637  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring networks are active...
	I1206 19:55:23.714335  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network default is active
	I1206 19:55:23.714639  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network mk-embed-certs-209025 is active
	I1206 19:55:23.714978  115591 main.go:141] libmachine: (embed-certs-209025) Getting domain xml...
	I1206 19:55:23.715617  115591 main.go:141] libmachine: (embed-certs-209025) Creating domain...
	I1206 19:55:22.530618  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has current primary IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531107  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Found IP for machine: 192.168.72.22
	I1206 19:55:22.531117  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserving static IP address...
	I1206 19:55:22.531437  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.531465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | skip adding static IP to network mk-default-k8s-diff-port-380424 - found existing host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"}
	I1206 19:55:22.531485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Getting to WaitForSSH function...
	I1206 19:55:22.531496  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserved static IP address: 192.168.72.22
	I1206 19:55:22.531554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for SSH to be available...
	I1206 19:55:22.533485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533729  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.533752  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533853  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH client type: external
	I1206 19:55:22.533880  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa (-rw-------)
	I1206 19:55:22.533916  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:22.533941  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | About to run SSH command:
	I1206 19:55:22.533957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | exit 0
	I1206 19:55:22.620864  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:22.621194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetConfigRaw
	I1206 19:55:22.621844  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.624194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624565  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.624599  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624876  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:55:22.625062  115497 machine.go:88] provisioning docker machine ...
	I1206 19:55:22.625081  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:22.625310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625481  115497 buildroot.go:166] provisioning hostname "default-k8s-diff-port-380424"
	I1206 19:55:22.625502  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625635  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.627886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628227  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.628255  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.628499  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628658  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628784  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.628940  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.629440  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.629462  115497 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-380424 && echo "default-k8s-diff-port-380424" | sudo tee /etc/hostname
	I1206 19:55:22.753829  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-380424
	
	I1206 19:55:22.753867  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.756620  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.756958  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.757001  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.757129  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.757375  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757558  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757700  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.757868  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.758197  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.758252  115497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-380424' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-380424/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-380424' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:22.878138  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:22.878175  115497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:22.878202  115497 buildroot.go:174] setting up certificates
	I1206 19:55:22.878246  115497 provision.go:83] configureAuth start
	I1206 19:55:22.878259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.878557  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.881145  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881515  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.881547  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881657  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.883591  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.883943  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.883981  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.884062  115497 provision.go:138] copyHostCerts
	I1206 19:55:22.884122  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:22.884135  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:22.884203  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:22.884334  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:22.884346  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:22.884375  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:22.884446  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:22.884457  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:22.884484  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:22.884539  115497 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-380424 san=[192.168.72.22 192.168.72.22 localhost 127.0.0.1 minikube default-k8s-diff-port-380424]
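provision.go reports generating a server certificate signed by the local CA with the listed SANs (the VM IP twice, localhost, 127.0.0.1, minikube and the profile name). The following is only an illustrative Go sketch of issuing such a SAN certificate with crypto/x509; it is not minikube's actual provisioner, and it assumes the CA key is an RSA key in PKCS#1 PEM form and that ca.pem/ca-key.pem sit in the working directory.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the signing CA (file names mirror the certs/ directory from the log).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	must(err)

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-380424"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-380424"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.22"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}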
	I1206 19:55:22.973559  115497 provision.go:172] copyRemoteCerts
	I1206 19:55:22.973627  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:22.973660  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.976374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976656  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.976695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976888  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.977068  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.977300  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.977468  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.061925  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:23.085093  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1206 19:55:23.108283  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:55:23.131666  115497 provision.go:86] duration metric: configureAuth took 253.404471ms
	I1206 19:55:23.131701  115497 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:23.131879  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:23.131957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.134672  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135033  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.135077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135214  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.135436  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135622  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135781  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.135941  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.136393  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.136427  115497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:23.445361  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:23.445389  115497 machine.go:91] provisioned docker machine in 820.312346ms
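The "%!s(MISSING)" fragment in the logged command above (and "%!N(MISSING)" further down) is almost certainly a logging artefact rather than what actually ran on the VM: the literal shell command contains its own printf %s, and when that string is passed through a Go format call with no matching argument, fmt renders the verb as %!verb(MISSING). The SSH output two lines up shows the file content that was really written (CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '). A two-line illustration of the fmt behaviour:

package main

import "fmt"

func main() {
	// %s and %N have no arguments here, so fmt prints them as
	// %!s(MISSING) and %!N(MISSING) - exactly the artefacts in the log.
	fmt.Printf("date +%s.%N\n")
}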
	I1206 19:55:23.445404  115497 start.go:300] post-start starting for "default-k8s-diff-port-380424" (driver="kvm2")
	I1206 19:55:23.445418  115497 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:23.445457  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.445851  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:23.445886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.448493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.448851  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.448879  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.449021  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.449210  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.449408  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.449562  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.535493  115497 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:23.539696  115497 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:23.539718  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:23.539780  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:23.539862  115497 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:23.539968  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:23.548629  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:23.572264  115497 start.go:303] post-start completed in 126.842848ms
	I1206 19:55:23.572287  115497 fix.go:56] fixHost completed within 19.221864403s
	I1206 19:55:23.572321  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.575329  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.575739  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575890  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.576093  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576272  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576429  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.576599  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.577046  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.577064  115497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:23.690035  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892523.637580982
	
	I1206 19:55:23.690064  115497 fix.go:206] guest clock: 1701892523.637580982
	I1206 19:55:23.690084  115497 fix.go:219] Guest: 2023-12-06 19:55:23.637580982 +0000 UTC Remote: 2023-12-06 19:55:23.572291664 +0000 UTC m=+277.181979500 (delta=65.289318ms)
	I1206 19:55:23.690146  115497 fix.go:190] guest clock delta is within tolerance: 65.289318ms
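fix.go reads the guest clock over SSH (`date +%s.%N`, garbled to `%!s(MISSING).%!N(MISSING)` above), compares it with the host clock, and proceeds because the ~65ms delta is within tolerance. A small sketch of that comparison, assuming the guest output has already been captured into a string and using an illustrative 2-second tolerance (the real threshold may differ):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Example guest reading, copied from the log ("date +%s.%N" output).
	guestOut := "1701892523.637580982"

	// Float parsing loses sub-microsecond precision, which is fine for a
	// millisecond-level drift check.
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest: %s, delta: %s, within 2s tolerance: %v\n",
		guest.UTC(), delta, delta < 2*time.Second)
}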
	I1206 19:55:23.690159  115497 start.go:83] releasing machines lock for "default-k8s-diff-port-380424", held for 19.339778523s
	I1206 19:55:23.690192  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.690465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:23.692996  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693337  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.693369  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694250  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694336  115497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:23.694390  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.694463  115497 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:23.694486  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.696938  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697063  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697393  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697473  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697514  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697593  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697674  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.697675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697876  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.697899  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.698044  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.698038  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.698167  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.786973  115497 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:23.814262  115497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:23.954235  115497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:23.961434  115497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:23.961523  115497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:23.981459  115497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:23.981488  115497 start.go:475] detecting cgroup driver to use...
	I1206 19:55:23.981550  115497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:24.000294  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:24.013738  115497 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:24.013799  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:24.030844  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:24.044583  115497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:24.161979  115497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:24.296507  115497 docker.go:219] disabling docker service ...
	I1206 19:55:24.296580  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:24.311171  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:24.323538  115497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:24.440425  115497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:24.570168  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:24.583169  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:24.600733  115497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:24.600790  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.610057  115497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:24.610129  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.621925  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.631383  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
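The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image is registry.k8s.io/pause:3.9, the cgroup manager is cgroupfs, and conmon is placed in the pod cgroup (any pre-existing conmon_cgroup line is dropped first). A hedged Go sketch of the same in-place rewrite done with regexp instead of sed; the file path and the resulting settings come from the log, the rest is illustrative and reproduces the net effect of the four sed commands.

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// sed -i '/conmon_cgroup = .*/d'  (remove any existing conmon_cgroup line)
	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\r?\n?`).ReplaceAll(data, nil)
	// Combined effect of the cgroup_manager replacement plus the appended
	// conmon_cgroup = "pod" line.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}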
	I1206 19:55:24.640414  115497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:24.649853  115497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:24.657999  115497 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:24.658052  115497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:24.672821  115497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:24.681200  115497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:24.812790  115497 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:24.989383  115497 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:24.989483  115497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:24.995335  115497 start.go:543] Will wait 60s for crictl version
	I1206 19:55:24.995404  115497 ssh_runner.go:195] Run: which crictl
	I1206 19:55:24.999307  115497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:25.038932  115497 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:25.039046  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.085844  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.148264  115497 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:25.149676  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:25.152759  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153157  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:25.153201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153451  115497 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:25.157621  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:25.173609  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:25.173680  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:25.223564  115497 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:25.223647  115497 ssh_runner.go:195] Run: which lz4
	I1206 19:55:25.228720  115497 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:25.234028  115497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:25.234061  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:23.280317  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.280398  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.291959  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.780005  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.780086  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.794371  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:24.257148  115217 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:24.257182  115217 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:24.257196  115217 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:24.257291  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:24.300759  115217 cri.go:89] found id: ""
	I1206 19:55:24.300832  115217 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:24.319509  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:24.329215  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:24.329310  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338150  115217 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338187  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:24.490031  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.123737  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.359750  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.550542  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.697003  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:25.697091  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:25.713836  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.231509  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.730965  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.231602  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.731612  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.763155  115217 api_server.go:72] duration metric: took 2.066152846s to wait for apiserver process to appear ...
	I1206 19:55:27.763181  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:27.763200  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
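At this point api_server.go switches from waiting for the kube-apiserver process to probing https://192.168.61.33:8443/healthz. A minimal sketch of such a probe loop; the InsecureSkipVerify shortcut and the 2-minute budget are assumptions made to keep the example self-contained, not minikube's exact client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 OK.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikube's own CA; skipping
		// verification keeps this sketch short (a real client would load ca.crt).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.33:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}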
	I1206 19:55:25.055509  115591 main.go:141] libmachine: (embed-certs-209025) Waiting to get IP...
	I1206 19:55:25.056687  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.057138  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.057192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.057100  116938 retry.go:31] will retry after 304.168381ms: waiting for machine to come up
	I1206 19:55:25.363765  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.364265  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.364404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.364341  116938 retry.go:31] will retry after 351.729741ms: waiting for machine to come up
	I1206 19:55:25.718184  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.718746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.718774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.718650  116938 retry.go:31] will retry after 340.321802ms: waiting for machine to come up
	I1206 19:55:26.060168  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.060796  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.060843  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.060725  116938 retry.go:31] will retry after 422.434651ms: waiting for machine to come up
	I1206 19:55:26.484420  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.484967  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.485003  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.484931  116938 retry.go:31] will retry after 584.854153ms: waiting for machine to come up
	I1206 19:55:27.071766  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.072298  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.072325  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.072233  116938 retry.go:31] will retry after 710.482528ms: waiting for machine to come up
	I1206 19:55:27.784162  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.784660  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.784695  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.784560  116938 retry.go:31] will retry after 754.279817ms: waiting for machine to come up
	I1206 19:55:28.540261  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:28.540788  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:28.540818  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:28.540728  116938 retry.go:31] will retry after 1.359726156s: waiting for machine to come up
	I1206 19:55:27.194696  115497 crio.go:444] Took 1.966010 seconds to copy over tarball
	I1206 19:55:27.194774  115497 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:30.501183  115497 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.306375512s)
	I1206 19:55:30.501222  115497 crio.go:451] Took 3.306493 seconds to extract the tarball
	I1206 19:55:30.501249  115497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:30.542574  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:30.587381  115497 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:30.587405  115497 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:30.587483  115497 ssh_runner.go:195] Run: crio config
	I1206 19:55:30.649117  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:30.649140  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:30.649163  115497 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:30.649191  115497 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.22 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-380424 NodeName:default-k8s-diff-port-380424 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:30.649383  115497 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.22
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-380424"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:30.649487  115497 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-380424 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1206 19:55:30.649561  115497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:30.659186  115497 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:30.659297  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:30.668534  115497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1206 19:55:30.684815  115497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:30.701801  115497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1206 19:55:30.721756  115497 ssh_runner.go:195] Run: grep 192.168.72.22	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:30.726656  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:30.738943  115497 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424 for IP: 192.168.72.22
	I1206 19:55:30.738981  115497 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:30.739159  115497 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:30.739219  115497 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:30.739322  115497 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.key
	I1206 19:55:30.739426  115497 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key.99d663cb
	I1206 19:55:30.739489  115497 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key
	I1206 19:55:30.739629  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:30.739672  115497 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:30.739689  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:30.739726  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:30.739762  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:30.739801  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:30.739872  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:30.740532  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:30.766689  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:30.792892  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:30.817640  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:30.842916  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:30.868057  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:30.893993  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:30.924631  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:30.953503  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:30.980162  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:31.007247  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:31.034274  115497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:31.054544  115497 ssh_runner.go:195] Run: openssl version
	I1206 19:55:31.062053  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:31.077159  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083640  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083707  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.091093  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:31.105305  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:31.117767  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123703  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123798  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.131531  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:31.142449  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:31.157311  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163707  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163783  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.170831  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:31.183300  115497 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:31.188165  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:31.194562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:31.201769  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:31.209562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:31.217346  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:31.225522  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 19:55:31.233755  115497 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-380424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:31.233889  115497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:31.233952  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:31.278891  115497 cri.go:89] found id: ""
	I1206 19:55:31.278972  115497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:31.291971  115497 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:31.291999  115497 kubeadm.go:636] restartCluster start
	I1206 19:55:31.292070  115497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:31.304934  115497 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.306156  115497 kubeconfig.go:92] found "default-k8s-diff-port-380424" server: "https://192.168.72.22:8444"
	I1206 19:55:31.308710  115497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:31.321910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.321976  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.339075  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.339096  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.339143  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.354436  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.765826  115217 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 19:55:32.765895  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:29.902670  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:29.903123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:29.903152  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:29.903081  116938 retry.go:31] will retry after 1.188380941s: waiting for machine to come up
	I1206 19:55:31.092707  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:31.093278  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:31.093311  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:31.093245  116938 retry.go:31] will retry after 1.854046475s: waiting for machine to come up
	I1206 19:55:32.948423  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:32.948866  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:32.948891  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:32.948827  116938 retry.go:31] will retry after 2.868825903s: waiting for machine to come up
	I1206 19:55:34.066100  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:34.066146  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:34.566904  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:34.573643  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:34.573675  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.066235  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.076927  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:35.076966  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.566361  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.574853  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 19:55:35.585855  115217 api_server.go:141] control plane version: v1.16.0
	I1206 19:55:35.585895  115217 api_server.go:131] duration metric: took 7.822706447s to wait for apiserver health ...
	I1206 19:55:35.585908  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:35.585917  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:35.587984  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:31.855148  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.855275  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.867628  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.355238  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.355330  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.368154  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.854710  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.854820  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.870926  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.355493  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.355586  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.371984  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.854511  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.854604  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.871260  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.354793  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.354897  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.371333  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.855487  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.855575  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.868348  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.354949  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.355026  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.367357  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.854910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.855003  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.871382  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:36.354908  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.355047  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.371112  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.589529  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:35.599454  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:35.616803  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:35.626793  115217 system_pods.go:59] 7 kube-system pods found
	I1206 19:55:35.626829  115217 system_pods.go:61] "coredns-5644d7b6d9-nrtk9" [447f7434-3f97-4e3f-9451-d9a54bff7ba1] Running
	I1206 19:55:35.626837  115217 system_pods.go:61] "etcd-old-k8s-version-448851" [77c1f822-788f-4f28-8f8e-54278d5d9e10] Running
	I1206 19:55:35.626843  115217 system_pods.go:61] "kube-apiserver-old-k8s-version-448851" [d3cf3d55-8862-4f81-ac61-99b202469859] Running
	I1206 19:55:35.626851  115217 system_pods.go:61] "kube-controller-manager-old-k8s-version-448851" [58ffb9bc-b5a3-4c64-a78f-da0011e6c277] Running
	I1206 19:55:35.626869  115217 system_pods.go:61] "kube-proxy-sw4qv" [6c08ab4a-447b-42e9-a617-ac35d66cf4ea] Running
	I1206 19:55:35.626879  115217 system_pods.go:61] "kube-scheduler-old-k8s-version-448851" [378ead75-3fd6-4cfd-a063-f2afc3a1cd12] Running
	I1206 19:55:35.626886  115217 system_pods.go:61] "storage-provisioner" [cce901c3-37d9-4ae2-ab9c-99bb7fda6d23] Running
	I1206 19:55:35.626901  115217 system_pods.go:74] duration metric: took 10.069819ms to wait for pod list to return data ...
	I1206 19:55:35.626910  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:35.632164  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:35.632240  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:35.632256  115217 node_conditions.go:105] duration metric: took 5.340532ms to run NodePressure ...
	I1206 19:55:35.632282  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:35.925990  115217 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:35.935849  115217 retry.go:31] will retry after 256.122518ms: kubelet not initialised
	I1206 19:55:36.197872  115217 retry.go:31] will retry after 337.717759ms: kubelet not initialised
	I1206 19:55:36.541368  115217 retry.go:31] will retry after 784.037462ms: kubelet not initialised
	I1206 19:55:37.331284  115217 retry.go:31] will retry after 921.381118ms: kubelet not initialised
	I1206 19:55:35.819131  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:35.819759  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:35.819793  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:35.819698  116938 retry.go:31] will retry after 2.281000862s: waiting for machine to come up
	I1206 19:55:38.103281  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:38.103807  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:38.103845  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:38.103736  116938 retry.go:31] will retry after 3.076134377s: waiting for machine to come up
	I1206 19:55:36.855191  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.855309  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.872110  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.354562  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.354682  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.370156  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.854600  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.854726  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.870621  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.355289  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.355391  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.368595  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.855116  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.855218  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.868455  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.354955  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.355048  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.368875  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.854833  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.854928  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.866765  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.354989  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.355106  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.367526  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.854791  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.854873  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.866579  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:41.322422  115497 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:41.322456  115497 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:41.322472  115497 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:41.322548  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:41.360234  115497 cri.go:89] found id: ""
	I1206 19:55:41.360301  115497 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:41.376968  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:41.387639  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:41.387694  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397586  115497 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397617  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:38.258758  115217 retry.go:31] will retry after 961.817778ms: kubelet not initialised
	I1206 19:55:39.225505  115217 retry.go:31] will retry after 1.751905914s: kubelet not initialised
	I1206 19:55:40.982344  115217 retry.go:31] will retry after 1.649102014s: kubelet not initialised
	I1206 19:55:42.639410  115217 retry.go:31] will retry after 3.317462401s: kubelet not initialised
	I1206 19:55:41.182443  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:41.182893  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:41.182930  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:41.182837  116938 retry.go:31] will retry after 4.029797575s: waiting for machine to come up
	I1206 19:55:41.519134  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.404075  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.613308  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.707533  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.796041  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:42.796139  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:42.816782  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.336582  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.836183  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.336879  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.836718  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.336249  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.363947  115497 api_server.go:72] duration metric: took 2.567911355s to wait for apiserver process to appear ...
	I1206 19:55:45.363968  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:45.363984  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:46.486502  115078 start.go:369] acquired machines lock for "no-preload-989559" in 57.98684139s
	I1206 19:55:46.486560  115078 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:46.486570  115078 fix.go:54] fixHost starting: 
	I1206 19:55:46.487006  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:46.487052  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:46.506170  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1206 19:55:46.506576  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:46.507081  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:55:46.507110  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:46.507412  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:46.507600  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:55:46.508110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:55:46.509817  115078 fix.go:102] recreateIfNeeded on no-preload-989559: state=Stopped err=<nil>
	I1206 19:55:46.509843  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	W1206 19:55:46.509988  115078 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:46.512103  115078 out.go:177] * Restarting existing kvm2 VM for "no-preload-989559" ...
	I1206 19:55:45.214823  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215271  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has current primary IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215293  115591 main.go:141] libmachine: (embed-certs-209025) Found IP for machine: 192.168.50.164
	I1206 19:55:45.215341  115591 main.go:141] libmachine: (embed-certs-209025) Reserving static IP address...
	I1206 19:55:45.215738  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.215772  115591 main.go:141] libmachine: (embed-certs-209025) DBG | skip adding static IP to network mk-embed-certs-209025 - found existing host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"}
	I1206 19:55:45.215787  115591 main.go:141] libmachine: (embed-certs-209025) Reserved static IP address: 192.168.50.164
	I1206 19:55:45.215805  115591 main.go:141] libmachine: (embed-certs-209025) Waiting for SSH to be available...
	I1206 19:55:45.215821  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Getting to WaitForSSH function...
	I1206 19:55:45.217850  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.218219  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218370  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH client type: external
	I1206 19:55:45.218404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa (-rw-------)
	I1206 19:55:45.218438  115591 main.go:141] libmachine: (embed-certs-209025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:45.218452  115591 main.go:141] libmachine: (embed-certs-209025) DBG | About to run SSH command:
	I1206 19:55:45.218475  115591 main.go:141] libmachine: (embed-certs-209025) DBG | exit 0
	I1206 19:55:45.309353  115591 main.go:141] libmachine: (embed-certs-209025) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:45.309758  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetConfigRaw
	I1206 19:55:45.310547  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.313019  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.313369  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313638  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:55:45.313844  115591 machine.go:88] provisioning docker machine ...
	I1206 19:55:45.313870  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:45.314081  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314264  115591 buildroot.go:166] provisioning hostname "embed-certs-209025"
	I1206 19:55:45.314298  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314509  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.316952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317361  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.317395  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.317821  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.317954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.318079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.318235  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.318665  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.318683  115591 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-209025 && echo "embed-certs-209025" | sudo tee /etc/hostname
	I1206 19:55:45.459071  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-209025
	
	I1206 19:55:45.459107  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.461953  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.462362  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462592  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.462814  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463010  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.463353  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.463887  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.463916  115591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-209025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-209025/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-209025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:45.597186  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:45.597220  115591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:45.597253  115591 buildroot.go:174] setting up certificates
	I1206 19:55:45.597270  115591 provision.go:83] configureAuth start
	I1206 19:55:45.597288  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.597658  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.600582  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.600954  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.600983  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.601138  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.603355  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.603774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603942  115591 provision.go:138] copyHostCerts
	I1206 19:55:45.604012  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:45.604037  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:45.604113  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:45.604227  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:45.604243  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:45.604277  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:45.604353  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:45.604363  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:45.604390  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:45.604454  115591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-209025 san=[192.168.50.164 192.168.50.164 localhost 127.0.0.1 minikube embed-certs-209025]
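For context on the provision.go step above: the server certificate is issued against the cluster CA and carries the machine IP plus the usual local names as SANs. The Go sketch below shows one way such a SAN-bearing certificate can be produced with the standard library; it is an illustration only (the CA is generated in memory here rather than loaded from ca.pem/ca-key.pem, and error handling is elided), not minikube's actual implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (in-memory, purely for the sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate with the same kind of SAN list as in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-209025"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-209025"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.164"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the server certificate as PEM, mirroring the server.pem produced above.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}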
	I1206 19:55:45.706944  115591 provision.go:172] copyRemoteCerts
	I1206 19:55:45.707028  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:45.707069  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.709985  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710357  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.710398  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710530  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.710738  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.710917  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.711092  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:45.807035  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:45.831480  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:45.855902  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 19:55:45.882797  115591 provision.go:86] duration metric: configureAuth took 285.508678ms
	I1206 19:55:45.882831  115591 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:45.883074  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:45.883156  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.886130  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886576  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.886611  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886825  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.887026  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887198  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887348  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.887570  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.887900  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.887926  115591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:46.217654  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:46.217732  115591 machine.go:91] provisioned docker machine in 903.869734ms
	I1206 19:55:46.217748  115591 start.go:300] post-start starting for "embed-certs-209025" (driver="kvm2")
	I1206 19:55:46.217762  115591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:46.217788  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.218154  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:46.218190  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.220968  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221345  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.221378  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221557  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.221781  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.221951  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.222093  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.316289  115591 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:46.321014  115591 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:46.321046  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:46.321108  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:46.321183  115591 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:46.321304  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:46.331967  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:46.358983  115591 start.go:303] post-start completed in 141.214825ms
	I1206 19:55:46.359014  115591 fix.go:56] fixHost completed within 22.668688221s
	I1206 19:55:46.359037  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.361846  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362179  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.362212  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362452  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.362704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.362897  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.363073  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.363310  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:46.363803  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:46.363823  115591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 19:55:46.486321  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892546.422221924
	
	I1206 19:55:46.486350  115591 fix.go:206] guest clock: 1701892546.422221924
	I1206 19:55:46.486361  115591 fix.go:219] Guest: 2023-12-06 19:55:46.422221924 +0000 UTC Remote: 2023-12-06 19:55:46.359018 +0000 UTC m=+296.897065855 (delta=63.203924ms)
	I1206 19:55:46.486385  115591 fix.go:190] guest clock delta is within tolerance: 63.203924ms
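The guest-clock check above reads the VM's clock over SSH (date +%s.%N), compares it with the host's, and only resyncs when the difference exceeds a tolerance. A minimal sketch of that comparison, using the values from this log and an assumed 2s tolerance (the real threshold is not shown here):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock that no resync is needed (the tolerance value is an assumption).
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// Values taken from the log above: guest clock vs. the host "Remote" time.
	guest := time.Unix(1701892546, 422221924)
	host := time.Unix(1701892546, 359018000)
	fmt.Println("delta ok:", withinTolerance(guest, host, 2*time.Second)) // prints: delta ok: true
}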
	I1206 19:55:46.486391  115591 start.go:83] releasing machines lock for "embed-certs-209025", held for 22.796102432s
	I1206 19:55:46.486420  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.486727  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:46.489589  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.489890  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.489922  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.490079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490643  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490836  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490924  115591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:46.490974  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.491257  115591 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:46.491281  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.494034  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494326  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494379  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494405  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.494748  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494900  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.494958  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.495019  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495144  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.495137  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.495269  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495397  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.587575  115591 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:46.614901  115591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:46.764133  115591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:46.771049  115591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:46.771133  115591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:46.786157  115591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:46.786187  115591 start.go:475] detecting cgroup driver to use...
	I1206 19:55:46.786262  115591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:46.801158  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:46.812881  115591 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:46.812948  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:46.825139  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:46.838071  115591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:46.949823  115591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:47.080490  115591 docker.go:219] disabling docker service ...
	I1206 19:55:47.080572  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:47.094773  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:47.107963  115591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:47.233536  115591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:47.360425  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:47.377453  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:47.395959  115591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:47.396026  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.406599  115591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:47.406696  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.417082  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.427463  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
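The sed invocations above pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the same key rewrites over an in-memory copy of the drop-in (the sample input below is invented for the sketch):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of the drop-in before the rewrites.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same substitutions as the sed commands in the log above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}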
	I1206 19:55:47.438246  115591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:47.449910  115591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:47.459620  115591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:47.459675  115591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:47.476230  115591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:47.486777  115591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:47.597395  115591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:47.809260  115591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:47.809348  115591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:47.815968  115591 start.go:543] Will wait 60s for crictl version
	I1206 19:55:47.816035  115591 ssh_runner.go:195] Run: which crictl
	I1206 19:55:47.820214  115591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:47.869345  115591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:47.869435  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.923602  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.983187  115591 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:45.963265  115217 retry.go:31] will retry after 4.496095904s: kubelet not initialised
	I1206 19:55:47.984954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:47.988218  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.988742  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:47.988775  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.989031  115591 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:47.994471  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
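The one-liner above rewrites /etc/hosts so that exactly one host.minikube.internal entry points at the gateway IP: any existing line for that name is dropped and a fresh mapping is appended. A hedged Go equivalent of that filter-and-append, operating on an in-memory copy (paths and sample contents are illustrative):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line for name and appends one fresh
// "ip<TAB>name" mapping - the same effect as the grep -v / echo / cp pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.50.1", "host.minikube.internal"))
}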
	I1206 19:55:48.008964  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:48.009022  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:48.056234  115591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:48.056333  115591 ssh_runner.go:195] Run: which lz4
	I1206 19:55:48.061573  115591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 19:55:48.066119  115591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:48.066156  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:46.513897  115078 main.go:141] libmachine: (no-preload-989559) Calling .Start
	I1206 19:55:46.514072  115078 main.go:141] libmachine: (no-preload-989559) Ensuring networks are active...
	I1206 19:55:46.514830  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network default is active
	I1206 19:55:46.515153  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network mk-no-preload-989559 is active
	I1206 19:55:46.515533  115078 main.go:141] libmachine: (no-preload-989559) Getting domain xml...
	I1206 19:55:46.516251  115078 main.go:141] libmachine: (no-preload-989559) Creating domain...
	I1206 19:55:47.899847  115078 main.go:141] libmachine: (no-preload-989559) Waiting to get IP...
	I1206 19:55:47.900939  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:47.901513  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:47.901634  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:47.901487  117094 retry.go:31] will retry after 244.343929ms: waiting for machine to come up
	I1206 19:55:48.148254  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.148888  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.148927  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.148835  117094 retry.go:31] will retry after 258.755356ms: waiting for machine to come up
	I1206 19:55:48.409550  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.410401  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.410427  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.410308  117094 retry.go:31] will retry after 321.790541ms: waiting for machine to come up
	I1206 19:55:48.734055  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.734744  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.734768  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.734646  117094 retry.go:31] will retry after 464.789653ms: waiting for machine to come up
	I1206 19:55:49.201462  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.202032  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.202065  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.201985  117094 retry.go:31] will retry after 541.238407ms: waiting for machine to come up
	I1206 19:55:49.744792  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.745432  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.745461  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.745338  117094 retry.go:31] will retry after 791.407194ms: waiting for machine to come up
	I1206 19:55:50.538151  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:50.538857  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:50.538883  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:50.538741  117094 retry.go:31] will retry after 1.11510814s: waiting for machine to come up
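The interleaved retry.go lines are the no-preload profile waiting for libvirt's DHCP lease to report an IP for the new domain, backing off a little longer after each failed poll. A small self-contained sketch of that retry-with-backoff shape; the lookup function, growth factor, jitter, and attempt cap are assumptions for illustration, not minikube's exact values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little longer
// (with jitter) after each failed attempt, roughly like the retry messages above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the backoff between polls
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.0.2.10", nil // placeholder address, not from the log
	}, 10)
	fmt.Println(ip, err)
}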
	I1206 19:55:49.730248  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.730287  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:49.730318  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:49.788747  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.788796  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:50.289144  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.301437  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.301479  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:50.789018  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.800325  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.800374  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:51.289899  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:51.297638  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 19:55:51.310738  115497 api_server.go:141] control plane version: v1.28.4
	I1206 19:55:51.310772  115497 api_server.go:131] duration metric: took 5.946796569s to wait for apiserver health ...
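The readiness wait above simply re-polls /healthz, treating 403 (anonymous access before the RBAC bootstrap roles exist) and 500 (post-start hooks still running) as "not yet" until a plain 200/ok comes back. A hedged sketch of such a loop follows; the insecure TLS setting and the 500ms interval are assumptions made for the sketch, not necessarily what minikube does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 or the deadline passes.
// Non-200 answers are reported and retried, mirroring the log output above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip cert verification for the anonymous probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.22:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}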
	I1206 19:55:51.310784  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:51.310793  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:51.312719  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:51.314431  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:51.335045  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
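The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist wires up a bridge CNI for the 10.244.0.0/16 pod CIDR recommended above. The Go snippet below marshals a minimal conflist of that general shape; the exact fields and values minikube writes may differ, so treat this as illustrative rather than the file's literal contents:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal bridge conflist with the pod CIDR from the CNI setup above.
	// Field values are illustrative, not a byte-for-byte copy of 1-k8s.conflist.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}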
	I1206 19:55:51.365598  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:51.381865  115497 system_pods.go:59] 8 kube-system pods found
	I1206 19:55:51.381914  115497 system_pods.go:61] "coredns-5dd5756b68-4rgxf" [2ae6daa5-430f-4f14-a40c-c29f4757fb06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:55:51.381936  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [895b0cdf-86c9-4b14-a633-4b172471cd2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:55:51.381947  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [ccc042d4-cd4c-4769-adc6-99d792146d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:55:51.381963  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [b3fbba6f-fa71-489e-81b0-0196bb019273] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:55:51.381972  115497 system_pods.go:61] "kube-proxy-9ftnp" [4389fff8-1b22-47a5-af97-35a4e5b6c2b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:55:51.381981  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [b53c666c-cc84-4dd3-b208-35d04113381c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:55:51.381997  115497 system_pods.go:61] "metrics-server-57f55c9bc5-7bblg" [3a6477d9-cb91-48cb-ba03-8b669db53841] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:55:51.382006  115497 system_pods.go:61] "storage-provisioner" [b8f06027-e37c-4c09-b361-4d70af65c991] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:55:51.382020  115497 system_pods.go:74] duration metric: took 16.393796ms to wait for pod list to return data ...
	I1206 19:55:51.382041  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:51.389181  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:51.389242  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:51.389256  115497 node_conditions.go:105] duration metric: took 7.208817ms to run NodePressure ...
	I1206 19:55:51.389285  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:50.466490  115217 retry.go:31] will retry after 11.434043258s: kubelet not initialised
	I1206 19:55:49.900059  115591 crio.go:444] Took 1.838540 seconds to copy over tarball
	I1206 19:55:49.900171  115591 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:53.471724  115591 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.571512743s)
	I1206 19:55:53.471757  115591 crio.go:451] Took 3.571659 seconds to extract the tarball
	I1206 19:55:53.471770  115591 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:53.522151  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:53.578068  115591 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:53.578167  115591 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:53.578285  115591 ssh_runner.go:195] Run: crio config
	I1206 19:55:53.650688  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:55:53.650715  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:53.650736  115591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:53.650762  115591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-209025 NodeName:embed-certs-209025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:53.650938  115591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-209025"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:53.651025  115591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-209025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:53.651093  115591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:53.663792  115591 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:53.663869  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:53.674126  115591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 19:55:53.692175  115591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:53.708842  115591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1206 19:55:53.726141  115591 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:53.730310  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:53.742456  115591 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025 for IP: 192.168.50.164
	I1206 19:55:53.742489  115591 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:53.742712  115591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:53.742765  115591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:53.742841  115591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/client.key
	I1206 19:55:53.742898  115591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key.d84b90a2
	I1206 19:55:53.742941  115591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key
	I1206 19:55:53.743053  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:53.743081  115591 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:53.743096  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:53.743135  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:53.743172  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:53.743205  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:53.743265  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:53.743932  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:53.770792  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:53.795080  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:53.820920  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 19:55:53.849068  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:53.875210  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:53.900201  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:53.927067  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:53.952810  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:53.979374  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:54.005013  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:54.028072  115591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:54.047087  115591 ssh_runner.go:195] Run: openssl version
	I1206 19:55:54.052949  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:54.064662  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069695  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069767  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.076520  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:54.088312  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:54.100303  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105718  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105787  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.111543  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:54.124094  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:54.137418  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142536  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142611  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.148497  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
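Each certificate installed above is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. 51391683.0) so the system trust store can resolve it. A small sketch that shells out to openssl to compute that hash and create the symlink; the helper function is hypothetical and the paths are simply the ones appearing in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and links
// the cert into dir as "<hash>.0", the layout the commands above produce.
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/70834.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}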
	I1206 19:55:54.160909  115591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:54.165739  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:54.171884  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:54.179765  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:54.187615  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:54.195156  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:54.203228  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
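The series of openssl x509 -checkend 86400 calls verifies that none of the control-plane certificates expire within the next 24 hours before the cluster restart is attempted. An equivalent check in Go, parsing the PEM and comparing NotAfter against the same 24h window (the helper itself is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// inside the given window - the same question "openssl x509 -checkend" answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}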
	I1206 19:55:54.210119  115591 kubeadm.go:404] StartCluster: {Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:54.210251  115591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:54.210308  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:54.258252  115591 cri.go:89] found id: ""
	I1206 19:55:54.258347  115591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:54.270699  115591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:54.270724  115591 kubeadm.go:636] restartCluster start
	I1206 19:55:54.270785  115591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:54.281833  115591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.282964  115591 kubeconfig.go:92] found "embed-certs-209025" server: "https://192.168.50.164:8443"
	I1206 19:55:54.285394  115591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:54.296437  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.296545  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.309685  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.309707  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.309774  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.322265  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:51.655238  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:51.655732  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:51.655776  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:51.655642  117094 retry.go:31] will retry after 958.384892ms: waiting for machine to come up
	I1206 19:55:52.616005  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:52.616540  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:52.616583  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:52.616471  117094 retry.go:31] will retry after 1.537571193s: waiting for machine to come up
	I1206 19:55:54.155949  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:54.156397  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:54.156429  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:54.156344  117094 retry.go:31] will retry after 2.030397746s: waiting for machine to come up
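The "will retry after ..." lines come from a backoff-with-jitter loop that waits for the KVM guest to obtain a DHCP lease. A rough sketch of that pattern, with a hypothetical lookupIP callback standing in for the libvirt lease query (illustrative, not libmachine's code):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP with a jittered, roughly doubling delay until it
    // returns an address or the timeout elapses.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := time.Second
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // jitter keeps parallel test VMs from polling the hypervisor in lockstep
            time.Sleep(wait + time.Duration(rand.Int63n(int64(wait/2))))
            if wait < 8*time.Second {
                wait *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 3 {
                return "", fmt.Errorf("no DHCP lease yet")
            }
            return "192.168.39.5", nil // the address the log eventually finds
        }, 30*time.Second)
        fmt.Println(ip, err)
    }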
	I1206 19:55:51.771991  115497 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:51.786960  115497 kubeadm.go:787] kubelet initialised
	I1206 19:55:51.787056  115497 kubeadm.go:788] duration metric: took 14.962005ms waiting for restarted kubelet to initialise ...
	I1206 19:55:51.787080  115497 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:55:51.799090  115497 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:53.845695  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:55.850483  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:54.823014  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.823105  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.835793  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.323393  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.323491  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.337041  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.823330  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.823437  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.839489  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.323250  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.323356  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.340029  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.822585  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.822700  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.835752  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.323326  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.323413  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.339916  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.823386  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.823478  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.840112  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.322441  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.322557  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.335485  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.822575  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.822695  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.839495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:59.323053  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.323129  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.336117  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
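Each "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pair above is one iteration of a roughly 500ms poll for the kube-apiserver process over SSH. A condensed sketch of that loop, with runSSH as a stand-in for minikube's ssh_runner rather than its real API:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // waitForAPIServerPID re-runs the pgrep command roughly every 500ms until it
    // succeeds or the context deadline expires.
    func waitForAPIServerPID(ctx context.Context, runSSH func(cmd string) error) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
                return nil // the process exists
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        start := time.Now()
        // fake runner: pretend the process shows up after three seconds
        err := waitForAPIServerPID(ctx, func(string) error {
            if time.Since(start) < 3*time.Second {
                return fmt.Errorf("Process exited with status 1")
            }
            return nil
        })
        fmt.Println(err)
    }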
	I1206 19:55:56.188549  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:56.189073  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:56.189105  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:56.189026  117094 retry.go:31] will retry after 2.455387871s: waiting for machine to come up
	I1206 19:55:58.646361  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:58.646772  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:58.646804  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:58.646710  117094 retry.go:31] will retry after 3.286246406s: waiting for machine to come up
	I1206 19:55:57.344443  115497 pod_ready.go:92] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"True"
	I1206 19:55:57.344478  115497 pod_ready.go:81] duration metric: took 5.545343389s waiting for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:57.344492  115497 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:59.363596  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.363703  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.907869  115217 retry.go:31] will retry after 21.572905296s: kubelet not initialised
	I1206 19:55:59.823000  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.823148  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.836153  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.322534  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.322617  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.340369  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.822851  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.822947  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.836512  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.323083  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.323161  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.337092  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.822623  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.822761  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.836428  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.323125  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.323213  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.336617  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.823198  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.823287  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.835923  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.322426  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.322527  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.336495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.822711  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.822803  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.836624  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:04.297216  115591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:04.297278  115591 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:04.297295  115591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:04.297393  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:04.343930  115591 cri.go:89] found id: ""
	I1206 19:56:04.344015  115591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:04.364785  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:04.376251  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:04.376320  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387749  115591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387779  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:04.511034  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:01.934204  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:01.934775  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:01.934798  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:01.934724  117094 retry.go:31] will retry after 2.967009815s: waiting for machine to come up
	I1206 19:56:04.903296  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:04.903725  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:04.903747  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:04.903692  117094 retry.go:31] will retry after 4.817836653s: waiting for machine to come up
	I1206 19:56:03.862804  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:04.373174  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.373209  115497 pod_ready.go:81] duration metric: took 7.028708302s waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.373222  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383300  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.383324  115497 pod_ready.go:81] duration metric: took 10.094356ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383333  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390225  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.390254  115497 pod_ready.go:81] duration metric: took 6.909695ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390267  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396713  115497 pod_ready.go:92] pod "kube-proxy-9ftnp" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.396753  115497 pod_ready.go:81] duration metric: took 6.477432ms waiting for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396766  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407015  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.407042  115497 pod_ready.go:81] duration metric: took 10.266604ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407056  115497 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
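The pod_ready waits above check each system-critical pod's Ready condition in turn. A hedged client-go sketch of the same check (assuming client-go is available and a kubeconfig sits in the default location; the pod name is just the etcd pod from this log, and this is not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-380424", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }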
	I1206 19:56:05.819075  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.307992443s)
	I1206 19:56:05.819111  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.024824  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.120865  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
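Taken together with the certs and kubeconfig phases started earlier, the restart path replays the kubeadm init phases one by one against the generated kubeadm.yaml. A compact sketch of that sequence (illustrative, not minikube's code; the paths and the binaries PATH prefix are copied from the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
                log.Fatalf("kubeadm phase %q failed: %v", p, err)
            }
        }
    }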
	I1206 19:56:06.207869  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:06.207959  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.221306  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.734164  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.234302  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.734130  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.233726  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.734073  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.762848  115591 api_server.go:72] duration metric: took 2.554978073s to wait for apiserver process to appear ...
	I1206 19:56:08.762881  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:08.762903  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:09.723600  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724078  115078 main.go:141] libmachine: (no-preload-989559) Found IP for machine: 192.168.39.5
	I1206 19:56:09.724107  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has current primary IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724114  115078 main.go:141] libmachine: (no-preload-989559) Reserving static IP address...
	I1206 19:56:09.724466  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.724509  115078 main.go:141] libmachine: (no-preload-989559) DBG | skip adding static IP to network mk-no-preload-989559 - found existing host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"}
	I1206 19:56:09.724526  115078 main.go:141] libmachine: (no-preload-989559) Reserved static IP address: 192.168.39.5
	I1206 19:56:09.724536  115078 main.go:141] libmachine: (no-preload-989559) Waiting for SSH to be available...
	I1206 19:56:09.724546  115078 main.go:141] libmachine: (no-preload-989559) DBG | Getting to WaitForSSH function...
	I1206 19:56:09.726831  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727117  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.727149  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727248  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH client type: external
	I1206 19:56:09.727277  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa (-rw-------)
	I1206 19:56:09.727306  115078 main.go:141] libmachine: (no-preload-989559) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:56:09.727317  115078 main.go:141] libmachine: (no-preload-989559) DBG | About to run SSH command:
	I1206 19:56:09.727361  115078 main.go:141] libmachine: (no-preload-989559) DBG | exit 0
	I1206 19:56:09.866040  115078 main.go:141] libmachine: (no-preload-989559) DBG | SSH cmd err, output: <nil>: 
	I1206 19:56:09.866443  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetConfigRaw
	I1206 19:56:09.867193  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:09.869892  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870335  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.870374  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870612  115078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/config.json ...
	I1206 19:56:09.870870  115078 machine.go:88] provisioning docker machine ...
	I1206 19:56:09.870895  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:09.871120  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871299  115078 buildroot.go:166] provisioning hostname "no-preload-989559"
	I1206 19:56:09.871320  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871471  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:09.874146  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874514  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.874554  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874741  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:09.874943  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875114  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875258  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:09.875412  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:09.875921  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:09.875942  115078 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-989559 && echo "no-preload-989559" | sudo tee /etc/hostname
	I1206 19:56:10.017205  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-989559
	
	I1206 19:56:10.017259  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.020397  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.020843  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.020873  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.021040  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.021287  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021450  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021578  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.021773  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.022227  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.022255  115078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-989559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-989559/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-989559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:56:10.160934  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:56:10.161020  115078 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:56:10.161056  115078 buildroot.go:174] setting up certificates
	I1206 19:56:10.161072  115078 provision.go:83] configureAuth start
	I1206 19:56:10.161086  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:10.161464  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:10.164558  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.164956  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.165007  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.165246  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.167911  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168352  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.168412  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168529  115078 provision.go:138] copyHostCerts
	I1206 19:56:10.168589  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:56:10.168612  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:56:10.168675  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:56:10.168796  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:56:10.168811  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:56:10.168844  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:56:10.168923  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:56:10.168962  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:56:10.168990  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:56:10.169062  115078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.no-preload-989559 san=[192.168.39.5 192.168.39.5 localhost 127.0.0.1 minikube no-preload-989559]
	I1206 19:56:10.266595  115078 provision.go:172] copyRemoteCerts
	I1206 19:56:10.266665  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:56:10.266693  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.269388  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269786  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.269813  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269987  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.270226  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.270390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.270536  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.362922  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:56:10.388165  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 19:56:10.412473  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:56:10.436804  115078 provision.go:86] duration metric: configureAuth took 275.714086ms
	I1206 19:56:10.436840  115078 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:56:10.437076  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 19:56:10.437156  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.439999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440419  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.440461  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440567  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.440813  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441006  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441213  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.441393  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.441827  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.441844  115078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:56:10.766695  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:56:10.766726  115078 machine.go:91] provisioned docker machine in 895.840237ms
	I1206 19:56:10.766739  115078 start.go:300] post-start starting for "no-preload-989559" (driver="kvm2")
	I1206 19:56:10.766759  115078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:56:10.766780  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:10.767134  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:56:10.767175  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.770309  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770704  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.770733  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770881  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.771110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.771247  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.771414  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.869486  115078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:56:10.874406  115078 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:56:10.874433  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:56:10.874502  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:56:10.874584  115078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:56:10.874684  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:56:10.885837  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:10.910379  115078 start.go:303] post-start completed in 143.622076ms
	I1206 19:56:10.910406  115078 fix.go:56] fixHost completed within 24.423837205s
	I1206 19:56:10.910430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.913414  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.913887  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.913924  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.914062  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.914276  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914575  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.914741  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.915078  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.915096  115078 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:56:06.672320  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:09.170277  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.173418  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.046393  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892571.030057611
	
	I1206 19:56:11.046418  115078 fix.go:206] guest clock: 1701892571.030057611
	I1206 19:56:11.046427  115078 fix.go:219] Guest: 2023-12-06 19:56:11.030057611 +0000 UTC Remote: 2023-12-06 19:56:10.910410702 +0000 UTC m=+364.968252500 (delta=119.646909ms)
	I1206 19:56:11.046452  115078 fix.go:190] guest clock delta is within tolerance: 119.646909ms
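The fix.go lines above compare the guest's "date +%s.%N" output with the host clock and skip a resync when the drift is inside a tolerance (about 120ms here). A tiny sketch of that comparison; the one-second tolerance below is an assumption, not the value minikube uses:

    package main

    import (
        "fmt"
        "time"
    )

    // clockWithinTolerance reports whether the guest clock is close enough to the
    // host clock to skip a resync.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(120 * time.Millisecond) // roughly the delta seen in the log
        fmt.Println(clockWithinTolerance(guest, host, time.Second))
    }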
	I1206 19:56:11.046460  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 24.559924375s
	I1206 19:56:11.046485  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.046751  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:11.049522  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.049918  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.049958  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.050160  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050715  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050932  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.051010  115078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:56:11.051063  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.051201  115078 ssh_runner.go:195] Run: cat /version.json
	I1206 19:56:11.051234  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.054142  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054342  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054556  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054587  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054711  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.054925  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054930  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.054950  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.055054  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.055147  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055316  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.055338  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.055483  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055605  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.180256  115078 ssh_runner.go:195] Run: systemctl --version
	I1206 19:56:11.186702  115078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:56:11.339386  115078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:56:11.346262  115078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:56:11.346364  115078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:56:11.362865  115078 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:56:11.362902  115078 start.go:475] detecting cgroup driver to use...
	I1206 19:56:11.362988  115078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:56:11.383636  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:56:11.397157  115078 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:56:11.397264  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:56:11.411338  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:56:11.425762  115078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:56:11.560730  115078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:56:11.708633  115078 docker.go:219] disabling docker service ...
	I1206 19:56:11.708713  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:56:11.723172  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:56:11.737032  115078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:56:11.851037  115078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:56:11.969321  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:56:11.982745  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:56:12.003130  115078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:56:12.003215  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.013345  115078 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:56:12.013428  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.023765  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.034114  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.044159  115078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
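The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, and a pod-scoped conmon cgroup. A hedged Go equivalent of those in-place edits (illustrative only; minikube itself shells out to sed exactly as shown):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // drop any existing conmon_cgroup line, set the pause image and cgroup
        // manager, then re-add conmon_cgroup next to cgroup_manager, mirroring
        // the sed commands in the log
        data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAll(data, nil)
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }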
	I1206 19:56:12.054135  115078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:56:12.062781  115078 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:56:12.062861  115078 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:56:12.076322  115078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:56:12.085924  115078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:56:12.216360  115078 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:56:12.409482  115078 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:56:12.409550  115078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:56:12.417063  115078 start.go:543] Will wait 60s for crictl version
	I1206 19:56:12.417135  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:12.422177  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:56:12.474340  115078 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:56:12.474449  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.538091  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.604444  115078 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 19:56:12.144887  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.144921  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.144936  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.179318  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.179366  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.679803  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.694412  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:12.694449  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.179503  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.193118  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:13.193161  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.679759  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.685603  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 19:56:13.694792  115591 api_server.go:141] control plane version: v1.28.4
	I1206 19:56:13.694831  115591 api_server.go:131] duration metric: took 4.931941572s to wait for apiserver health ...
	I1206 19:56:13.694843  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:56:13.694852  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:13.697042  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:13.698653  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:13.712991  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:13.734001  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:13.761962  115591 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:13.762001  115591 system_pods.go:61] "coredns-5dd5756b68-cpst4" [e7d8324e-8468-470c-b532-1f09ee805bab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:13.762022  115591 system_pods.go:61] "etcd-embed-certs-209025" [eeb81149-8e43-4efe-b977-e8f84c7a7c57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:13.762032  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b64e228d-4921-4e35-b80c-343f8519076e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:13.762041  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [2206d849-0724-42c9-b5c4-4d84c3cafce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:13.762053  115591 system_pods.go:61] "kube-proxy-pt8nj" [b7cffe6a-4401-40e0-8056-68452e15b57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:13.762068  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [88ae7a94-a1bc-463a-9253-5f308ec1755e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:13.762077  115591 system_pods.go:61] "metrics-server-57f55c9bc5-dr9k8" [0dbe18a4-d30d-4882-b188-b0d1f1b1d711] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:13.762092  115591 system_pods.go:61] "storage-provisioner" [afebf144-9062-4b43-a491-9eecd5ab6c10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:13.762109  115591 system_pods.go:74] duration metric: took 28.078588ms to wait for pod list to return data ...
	I1206 19:56:13.762120  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:13.773614  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:13.773646  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:13.773657  115591 node_conditions.go:105] duration metric: took 11.528993ms to run NodePressure ...
	I1206 19:56:13.773678  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:14.157761  115591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169588  115591 kubeadm.go:787] kubelet initialised
	I1206 19:56:14.169632  115591 kubeadm.go:788] duration metric: took 11.756226ms waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169644  115591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:14.186031  115591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.211717  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211747  115591 pod_ready.go:81] duration metric: took 25.681607ms waiting for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.211759  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211769  115591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.219369  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219396  115591 pod_ready.go:81] duration metric: took 7.594898ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.219408  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219425  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.233417  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233513  115591 pod_ready.go:81] duration metric: took 14.073312ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.233535  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233546  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.244480  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244516  115591 pod_ready.go:81] duration metric: took 10.958431ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.244530  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244537  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:12.606102  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:12.609040  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609395  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:12.609436  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609665  115078 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:56:12.615279  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:12.629571  115078 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 19:56:12.629641  115078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:56:12.674728  115078 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 19:56:12.674763  115078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:56:12.674870  115078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.674886  115078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.674910  115078 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.674923  115078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.674965  115078 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1206 19:56:12.674885  115078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.674998  115078 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.674889  115078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676510  115078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.676539  115078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676563  115078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.676576  115078 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1206 19:56:12.676511  115078 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.676599  115078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.676624  115078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.676642  115078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.862606  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1206 19:56:12.882993  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.884387  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.900149  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.909389  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.916391  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.924669  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.946885  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.028628  115078 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1206 19:56:13.028685  115078 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.028741  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.095076  115078 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1206 19:56:13.095139  115078 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.095289  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.136956  115078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1206 19:56:13.137003  115078 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 19:56:13.137074  115078 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.137130  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.137005  115078 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.137268  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.146913  115078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1206 19:56:13.146970  115078 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.147024  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.159866  115078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1206 19:56:13.159913  115078 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.159963  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162288  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.162330  115078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1206 19:56:13.162375  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.162378  115078 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.162399  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.162407  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.165637  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.319155  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1206 19:56:13.319253  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.319274  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.319300  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 19:56:13.319371  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319394  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:13.319405  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1206 19:56:13.319423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319472  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:13.319495  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.319545  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319621  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319546  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.376009  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376036  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376100  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376145  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1206 19:56:13.376179  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1206 19:56:13.376217  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1206 19:56:13.376273  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376302  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376329  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:13.376423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:15.530421  115078 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.153965348s)
	I1206 19:56:15.530466  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1206 19:56:15.530502  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.154372843s)
	I1206 19:56:15.530536  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1206 19:56:15.530571  115078 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:15.530630  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.177508  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:15.671903  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:14.963353  115591 pod_ready.go:92] pod "kube-proxy-pt8nj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:14.963382  115591 pod_ready.go:81] duration metric: took 718.835702ms waiting for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.963391  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:17.284373  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.354814  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.824152707s)
	I1206 19:56:19.354846  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1206 19:56:19.354874  115078 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:19.354924  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:20.402300  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.047341059s)
	I1206 19:56:20.402334  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 19:56:20.402378  115078 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:20.402442  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:17.672489  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:20.171526  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.771500  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:22.273627  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.269993  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.270019  115591 pod_ready.go:81] duration metric: took 8.306621129s waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.270029  115591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:22.575204  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.17273177s)
	I1206 19:56:22.575240  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1206 19:56:22.575270  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:22.575318  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:25.335616  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.760267154s)
	I1206 19:56:25.335646  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1206 19:56:25.335680  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:25.335760  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:22.175410  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:24.677136  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.486162  115217 kubeadm.go:787] kubelet initialised
	I1206 19:56:23.486192  115217 kubeadm.go:788] duration metric: took 47.560169603s waiting for restarted kubelet to initialise ...
	I1206 19:56:23.486203  115217 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:23.491797  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499126  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.499149  115217 pod_ready.go:81] duration metric: took 7.327003ms waiting for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499160  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.503979  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.504002  115217 pod_ready.go:81] duration metric: took 4.834039ms waiting for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.504014  115217 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509110  115217 pod_ready.go:92] pod "etcd-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.509132  115217 pod_ready.go:81] duration metric: took 5.109845ms waiting for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509153  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514641  115217 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.514665  115217 pod_ready.go:81] duration metric: took 5.502762ms waiting for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514677  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886694  115217 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.886726  115217 pod_ready.go:81] duration metric: took 372.040617ms waiting for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886741  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287638  115217 pod_ready.go:92] pod "kube-proxy-sw4qv" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.287662  115217 pod_ready.go:81] duration metric: took 400.914693ms waiting for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287673  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688298  115217 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.688328  115217 pod_ready.go:81] duration metric: took 400.645544ms waiting for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688343  115217 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:26.991669  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:25.288536  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.290135  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.610095  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.274298339s)
	I1206 19:56:27.610132  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1206 19:56:27.610163  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:27.610219  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:30.272712  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.662458967s)
	I1206 19:56:30.272746  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1206 19:56:30.272782  115078 cache_images.go:123] Successfully loaded all cached images
	I1206 19:56:30.272789  115078 cache_images.go:92] LoadImages completed in 17.598011028s
	I1206 19:56:30.272883  115078 ssh_runner.go:195] Run: crio config
	I1206 19:56:30.341321  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:30.341346  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:30.341368  115078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:56:30.341392  115078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-989559 NodeName:no-preload-989559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:56:30.341597  115078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-989559"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:56:30.341693  115078 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-989559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:56:30.341758  115078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 19:56:30.351650  115078 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:56:30.351729  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:56:30.360413  115078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1206 19:56:30.376399  115078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 19:56:30.392522  115078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1206 19:56:30.409313  115078 ssh_runner.go:195] Run: grep 192.168.39.5	control-plane.minikube.internal$ /etc/hosts
	I1206 19:56:30.413355  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:30.426797  115078 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559 for IP: 192.168.39.5
	I1206 19:56:30.426854  115078 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:30.427070  115078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:56:30.427134  115078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:56:30.427240  115078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.key
	I1206 19:56:30.427311  115078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key.c9b343a5
	I1206 19:56:30.427350  115078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key
	I1206 19:56:30.427454  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:56:30.427508  115078 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:56:30.427521  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:56:30.427550  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:56:30.427571  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:56:30.427593  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:56:30.427634  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:30.428313  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:56:30.452268  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:56:30.476793  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:56:30.503751  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:56:30.530680  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:56:30.557770  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:56:30.582244  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:56:30.608096  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:56:30.634585  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:56:30.660669  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:56:30.686987  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:56:30.711098  115078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:56:30.727576  115078 ssh_runner.go:195] Run: openssl version
	I1206 19:56:30.733568  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:56:30.743777  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.748976  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.749033  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.755465  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:56:30.766285  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:56:30.777164  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782160  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782228  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.789394  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:56:30.801293  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:56:30.812646  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818147  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818209  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.824161  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:56:30.834389  115078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:56:30.839518  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:56:30.845997  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:56:30.852229  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:56:30.858622  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:56:30.864675  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:56:30.870945  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 19:56:30.878301  115078 kubeadm.go:404] StartCluster: {Name:no-preload-989559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:56:30.878438  115078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:56:30.878513  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:30.921588  115078 cri.go:89] found id: ""
	I1206 19:56:30.921692  115078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:56:30.932160  115078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:56:30.932190  115078 kubeadm.go:636] restartCluster start
	I1206 19:56:30.932264  115078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:56:30.942019  115078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.943237  115078 kubeconfig.go:92] found "no-preload-989559" server: "https://192.168.39.5:8443"
	I1206 19:56:30.945618  115078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:56:30.954582  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.954655  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.966532  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.966555  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.966602  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.979930  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:27.172625  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.671318  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:28.992218  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:30.994420  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.786922  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.787251  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.480021  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.480135  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.493287  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:31.980317  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.980409  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.994348  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.480929  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.481020  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.494940  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.980449  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.980559  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.993316  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.481040  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.481150  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.494210  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.980837  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.980936  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.994280  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.480389  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.480492  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.493915  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.980458  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.980569  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.994306  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.480788  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.480897  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:35.495397  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.980815  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.980919  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:32.171889  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:34.669989  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.491932  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.492626  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:37.991389  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.787950  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:38.288581  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	W1206 19:56:35.994848  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.480833  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.480959  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.496053  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.980074  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.980197  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.994615  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.480110  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.480203  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.494380  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.980922  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.981009  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.994865  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.480432  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.480536  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.494938  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.980148  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.980250  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.995427  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.481067  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.481153  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.494631  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.980142  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.980255  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.991638  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.480132  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:40.480205  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:40.492507  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.955413  115078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:40.955478  115078 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:40.955492  115078 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:40.955574  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:36.673986  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:39.172561  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:41.177049  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.490976  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.492210  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.293997  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.789693  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.997724  115078 cri.go:89] found id: ""
	I1206 19:56:40.997783  115078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:41.013137  115078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:41.021612  115078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:41.021667  115078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030846  115078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030878  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:41.160850  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.395616  115078 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234715721s)
	I1206 19:56:42.395651  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.595187  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.688245  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.769464  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:42.769566  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:42.783010  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.303551  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.803070  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.303922  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.803326  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.302954  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.323804  115078 api_server.go:72] duration metric: took 2.55435107s to wait for apiserver process to appear ...
	I1206 19:56:45.323839  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:45.323865  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.324588  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.324632  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.325115  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.825883  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:43.670089  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.670833  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:44.994670  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.492548  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.288109  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.788636  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.759033  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.759089  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.759117  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.778467  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.778502  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.825793  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.888751  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:49.888801  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.325211  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.330395  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.330438  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.826038  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.830801  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.830836  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:51.325298  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:51.331295  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 19:56:51.340412  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 19:56:51.340445  115078 api_server.go:131] duration metric: took 6.016598018s to wait for apiserver health ...
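The repeated healthz probes above (connection refused, then 403 for the anonymous user, then 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200) are minikube's api_server.go waiting for the restarted control plane to become healthy. A minimal Go sketch of that kind of polling loop follows; the endpoint, interval, and skipped TLS verification are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires. TLS verification is skipped here
// because the probe runs before the cluster CA is trusted locally.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// 403 (anonymous request rejected) and 500 (post-start hooks
			// still running) are expected while the control plane comes up.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.5:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}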
	I1206 19:56:51.340457  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:51.340465  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:51.383227  115078 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:47.671090  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:50.173835  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.494360  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.991886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.385027  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:51.399942  115078 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:51.422533  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:51.446615  115078 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:51.446661  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:51.446671  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:51.446684  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:51.446698  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:51.446707  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:51.446716  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:51.446731  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:51.446739  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:51.446749  115078 system_pods.go:74] duration metric: took 24.188803ms to wait for pod list to return data ...
	I1206 19:56:51.446758  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:51.452770  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:51.452803  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:51.452817  115078 node_conditions.go:105] duration metric: took 6.05327ms to run NodePressure ...
	I1206 19:56:51.452840  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:51.740786  115078 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746512  115078 kubeadm.go:787] kubelet initialised
	I1206 19:56:51.746541  115078 kubeadm.go:788] duration metric: took 5.720787ms waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746550  115078 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:51.752751  115078 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.761003  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761032  115078 pod_ready.go:81] duration metric: took 8.254381ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.761043  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761052  115078 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.766223  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766248  115078 pod_ready.go:81] duration metric: took 5.184525ms waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.766259  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766271  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.771516  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771541  115078 pod_ready.go:81] duration metric: took 5.262069ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.771552  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771561  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.827774  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827804  115078 pod_ready.go:81] duration metric: took 56.232455ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.827818  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827826  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.231699  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231761  115078 pod_ready.go:81] duration metric: took 403.922333ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.231774  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231790  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.626827  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626863  115078 pod_ready.go:81] duration metric: took 395.06457ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.626877  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626889  115078 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:53.028166  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028201  115078 pod_ready.go:81] duration metric: took 401.294916ms waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:53.028214  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028226  115078 pod_ready.go:38] duration metric: took 1.281664253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
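The pod_ready lines that fill this log, both the system-critical pods above and the metrics-server pods from the parallel StartStop profiles, come from a loop that polls each pod's Ready condition until it is True or the timeout expires. A rough client-go sketch of that check follows; the kubeconfig path, namespace, pod name, interval, and timeout are illustrative assumptions rather than minikube's own code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path and pod name, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-vz7qc", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}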
	I1206 19:56:53.028249  115078 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:56:53.057673  115078 ops.go:34] apiserver oom_adj: -16
	I1206 19:56:53.057706  115078 kubeadm.go:640] restartCluster took 22.12550727s
	I1206 19:56:53.057718  115078 kubeadm.go:406] StartCluster complete in 22.179430573s
	I1206 19:56:53.057756  115078 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.057857  115078 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:56:53.059885  115078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.060125  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:56:53.060244  115078 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:56:53.060337  115078 addons.go:69] Setting storage-provisioner=true in profile "no-preload-989559"
	I1206 19:56:53.060364  115078 addons.go:231] Setting addon storage-provisioner=true in "no-preload-989559"
	I1206 19:56:53.060367  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	W1206 19:56:53.060375  115078 addons.go:240] addon storage-provisioner should already be in state true
	I1206 19:56:53.060404  115078 addons.go:69] Setting default-storageclass=true in profile "no-preload-989559"
	I1206 19:56:53.060415  115078 addons.go:69] Setting metrics-server=true in profile "no-preload-989559"
	I1206 19:56:53.060430  115078 addons.go:231] Setting addon metrics-server=true in "no-preload-989559"
	I1206 19:56:53.060433  115078 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-989559"
	W1206 19:56:53.060440  115078 addons.go:240] addon metrics-server should already be in state true
	I1206 19:56:53.060452  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060472  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060856  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060889  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060917  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.065950  115078 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-989559" context rescaled to 1 replicas
	I1206 19:56:53.065992  115078 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:56:53.068038  115078 out.go:177] * Verifying Kubernetes components...
	I1206 19:56:53.069775  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:56:53.077795  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I1206 19:56:53.078120  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
	I1206 19:56:53.078274  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078714  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078902  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.078928  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079207  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.079226  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079272  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079514  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079727  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.079865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.079899  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.083670  115078 addons.go:231] Setting addon default-storageclass=true in "no-preload-989559"
	W1206 19:56:53.083695  115078 addons.go:240] addon default-storageclass should already be in state true
	I1206 19:56:53.083724  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.084178  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.084230  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.097845  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1206 19:56:53.098357  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.099058  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.099080  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.099409  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.099633  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.101625  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.103641  115078 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 19:56:53.105081  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I1206 19:56:53.105105  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 19:56:53.105123  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 19:56:53.105150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.104327  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1206 19:56:53.105556  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105777  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105983  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.105998  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106312  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.106328  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106619  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.106910  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.107192  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107229  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.107338  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107398  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.108297  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.108969  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.108999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.109150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.109436  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.109586  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.109725  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.123985  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I1206 19:56:53.124496  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125052  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.125078  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.125325  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1206 19:56:53.125570  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.125785  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125826  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.126385  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.126413  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.126875  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.127050  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.127923  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.128212  115078 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.128226  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:56:53.128242  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.128747  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.131043  115078 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:53.131487  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132638  115078 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.132645  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.132651  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:56:53.132667  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.132682  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132132  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.133425  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.133636  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.133870  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.136039  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136583  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.136612  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136850  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.137087  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.137390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.137583  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.247726  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 19:56:53.247751  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 19:56:53.271421  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.296149  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 19:56:53.296181  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 19:56:53.303580  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.350607  115078 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1206 19:56:53.350607  115078 node_ready.go:35] waiting up to 6m0s for node "no-preload-989559" to be "Ready" ...
	I1206 19:56:53.355315  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.355336  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 19:56:53.392730  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.624768  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.624798  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625224  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625330  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.625353  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.625393  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625227  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:53.625849  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625874  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.632671  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.632691  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.632983  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.633005  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433395  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12977215s)
	I1206 19:56:54.433462  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433491  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433360  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040565961s)
	I1206 19:56:54.433546  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433567  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433833  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433854  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433863  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433867  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433871  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433842  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433908  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433926  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433939  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433951  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.434124  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434148  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434153  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434199  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434212  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434224  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434240  115078 addons.go:467] Verifying addon metrics-server=true in "no-preload-989559"
	I1206 19:56:54.437357  115078 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 19:56:50.289141  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:52.289568  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.438928  115078 addons.go:502] enable addons completed in 1.378684523s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 19:56:55.439812  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.174520  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.175288  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.492713  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:56.493106  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.789039  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.288485  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.289450  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.931320  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:57:00.430485  115078 node_ready.go:49] node "no-preload-989559" has status "Ready":"True"
	I1206 19:57:00.430517  115078 node_ready.go:38] duration metric: took 7.079875254s waiting for node "no-preload-989559" to be "Ready" ...
	I1206 19:57:00.430530  115078 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:57:00.436772  115078 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442667  115078 pod_ready.go:92] pod "coredns-76f75df574-h9pkz" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:00.442688  115078 pod_ready.go:81] duration metric: took 5.884841ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442701  115078 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:56.671845  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.172634  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.175416  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:58.991760  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:00.992295  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.787443  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.787988  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:02.468096  115078 pod_ready.go:102] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:04.965881  115078 pod_ready.go:92] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.965905  115078 pod_ready.go:81] duration metric: took 4.523195911s waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.965916  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971414  115078 pod_ready.go:92] pod "kube-apiserver-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.971433  115078 pod_ready.go:81] duration metric: took 5.510214ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971441  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977851  115078 pod_ready.go:92] pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.977870  115078 pod_ready.go:81] duration metric: took 6.422623ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977878  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985189  115078 pod_ready.go:92] pod "kube-proxy-zgqvt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.985215  115078 pod_ready.go:81] duration metric: took 7.330713ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985224  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230810  115078 pod_ready.go:92] pod "kube-scheduler-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:05.230835  115078 pod_ready.go:81] duration metric: took 245.59313ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230845  115078 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:03.189551  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.673064  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.491815  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.991689  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.992156  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.789026  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.789964  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.538620  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.040533  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:08.171042  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.671754  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.491886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.287716  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.788212  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.538291  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.541614  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.672138  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:15.171421  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.992060  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.502730  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.788301  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.287038  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.288646  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.038893  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.543137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.671258  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:20.170885  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.991949  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.491591  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:21.787339  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:23.788729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.041590  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.540137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.171069  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.670919  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.992198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.492171  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:26.290524  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:28.787761  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.039132  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.542736  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.170762  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.171345  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.992006  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.288189  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:33.787785  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.039418  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.039727  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.670563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.170705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.171236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.492161  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.492522  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:35.788140  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:37.788283  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.540765  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:39.038645  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.171622  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.670580  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.990433  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.990810  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.992228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.287403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.287578  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.287701  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:41.039767  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.539800  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.543374  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.173769  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.670574  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.995625  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:47.492316  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:46.289397  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.787659  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.038286  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.039013  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.176705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.670177  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:49.991919  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.491478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.788175  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.288824  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.040785  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.538521  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.173256  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.670940  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.492526  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.493207  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.787745  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:57.788237  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.539097  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.039024  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.174463  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.674095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.990652  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.993255  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.788454  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:02.287774  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:04.288180  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:01.042813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.541670  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.171100  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.673480  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.492375  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.991094  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:07.992159  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.288916  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.289817  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.038556  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.038962  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.539560  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.171785  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.671152  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:09.993042  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.491776  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.790823  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.791724  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.540234  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.542433  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.672062  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.170654  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.993921  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.492163  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.289223  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.787808  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.038754  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.039749  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.171210  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.670633  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.991157  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.991531  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.788614  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:22.288567  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.040007  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.047504  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:25.539859  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.671920  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.173543  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.993354  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.491975  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.789151  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.789703  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.287981  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.038595  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.039044  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.670809  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.171281  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.492552  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.990797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.991467  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.289190  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.788860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.046392  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.538829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.671784  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.672095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:36.171077  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.992478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.492021  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:35.789666  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.287860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.038795  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.537643  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.670088  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.171066  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.991754  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.994379  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:40.288183  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:42.788826  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.539212  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.543524  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.674139  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.170213  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:44.491092  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.491632  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:45.287473  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:47.288157  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:49.289525  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.038254  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.039117  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.039290  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.170319  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.671091  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.492359  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.992132  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:51.787368  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.788448  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:52.039474  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:54.540427  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.169921  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.171727  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.492764  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.993038  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:56.287644  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.288171  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.038915  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.039626  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.671011  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.671928  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.491565  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.492398  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.994198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.788591  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.789729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:01.540414  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:03.547448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.172546  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:04.670363  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.492399  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.991600  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.287805  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.289128  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.039393  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:08.040259  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.541882  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.670653  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.172460  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.491981  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.991797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.788064  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.544283  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:15.040829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:11.673737  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.172972  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.992556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.492610  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.788287  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.789265  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.287925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.542363  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:20.039068  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.674724  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:18.675236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.170028  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.493199  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.992164  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.288023  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.289315  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:22.539662  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.038813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.170153  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.172299  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:24.491811  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:26.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.788309  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.791911  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.539832  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:29.540277  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.671148  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.171591  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:28.990920  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.992085  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.992394  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.288522  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.288574  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:31.542448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.039116  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.671751  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.169968  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.492708  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.992344  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.787925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.788270  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:38.788369  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.539113  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.040215  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.171340  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.171482  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.491091  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:42.491915  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.789138  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.287352  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.538818  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.539787  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.670936  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.671019  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.671158  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:44.992666  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.491581  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.287493  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.787403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:46.039500  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.538497  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.539750  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.171563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.673901  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.991083  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.991943  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.788072  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.788139  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.788885  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.039532  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.539183  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.177102  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.670778  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.992408  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.492592  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.288587  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.288722  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:57.539766  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.038890  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.171948  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.173211  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.492926  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.992517  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.992971  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.291465  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.292084  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.039986  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.541022  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.674513  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.407290  115497 pod_ready.go:81] duration metric: took 4m0.000215571s waiting for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:04.407325  115497 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:04.407343  115497 pod_ready.go:38] duration metric: took 4m12.62023597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:04.407376  115497 kubeadm.go:640] restartCluster took 4m33.115368763s
	W1206 20:00:04.407460  115497 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:04.407558  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:05.492129  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:07.493228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.788290  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.789396  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:08.789507  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.541064  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.040499  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.992817  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:12.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.288813  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.788228  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.540420  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.540837  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:14.492803  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:16.991852  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.762771  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.35517444s)
	I1206 20:00:18.762878  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:18.777691  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:18.788508  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:18.798417  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:18.798483  115497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:18.858377  115497 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:18.858486  115497 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:19.020664  115497 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:19.020845  115497 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:19.020979  115497 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:19.294254  115497 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:15.788560  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.288173  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:19.296186  115497 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:19.296294  115497 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:19.296394  115497 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:19.296512  115497 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:19.296601  115497 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:19.296712  115497 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:19.296779  115497 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:19.296938  115497 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:19.297044  115497 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:19.297141  115497 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:19.297228  115497 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:19.297296  115497 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:19.297374  115497 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:19.401712  115497 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:19.667664  115497 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:19.977926  115497 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:20.161984  115497 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:20.162704  115497 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:20.165273  115497 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:16.040687  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.540495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.167168  115497 out.go:204]   - Booting up control plane ...
	I1206 20:00:20.167327  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:20.167488  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:20.167596  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:20.186839  115497 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:20.187950  115497 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:20.188122  115497 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:20.329099  115497 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:18.991946  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:21.490687  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.290780  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:22.293161  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.270450  115591 pod_ready.go:81] duration metric: took 4m0.000401122s waiting for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:23.270504  115591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:23.270527  115591 pod_ready.go:38] duration metric: took 4m9.100871827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:23.270576  115591 kubeadm.go:640] restartCluster took 4m28.999844958s
	W1206 20:00:23.270666  115591 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:23.270705  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:21.040410  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.041625  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:25.044168  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.492875  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:24.689131  115217 pod_ready.go:81] duration metric: took 4m0.000750192s waiting for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:24.689173  115217 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:24.689203  115217 pod_ready.go:38] duration metric: took 4m1.202987977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:24.689247  115217 kubeadm.go:640] restartCluster took 5m10.459408033s
	W1206 20:00:24.689356  115217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:24.689392  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:29.334312  115497 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004152 seconds
	I1206 20:00:29.334473  115497 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:29.360390  115497 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:29.898911  115497 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:29.899167  115497 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-380424 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:30.416589  115497 kubeadm.go:322] [bootstrap-token] Using token: gsw79m.btql0t11yc11efah
	I1206 20:00:30.418388  115497 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:30.418538  115497 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:30.424651  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:30.439637  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:30.443854  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:30.448439  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:30.454084  115497 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:30.473340  115497 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:30.748803  115497 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:30.835721  115497 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:30.837289  115497 kubeadm.go:322] 
	I1206 20:00:30.837362  115497 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:30.837381  115497 kubeadm.go:322] 
	I1206 20:00:30.837449  115497 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:30.837457  115497 kubeadm.go:322] 
	I1206 20:00:30.837485  115497 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:30.837597  115497 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:30.837675  115497 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:30.837684  115497 kubeadm.go:322] 
	I1206 20:00:30.837760  115497 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:30.837770  115497 kubeadm.go:322] 
	I1206 20:00:30.837826  115497 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:30.837833  115497 kubeadm.go:322] 
	I1206 20:00:30.837899  115497 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:30.838016  115497 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:30.838114  115497 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:30.838124  115497 kubeadm.go:322] 
	I1206 20:00:30.838224  115497 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:30.838316  115497 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:30.838333  115497 kubeadm.go:322] 
	I1206 20:00:30.838409  115497 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838522  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:30.838559  115497 kubeadm.go:322] 	--control-plane 
	I1206 20:00:30.838568  115497 kubeadm.go:322] 
	I1206 20:00:30.838686  115497 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:30.838699  115497 kubeadm.go:322] 
	I1206 20:00:30.838805  115497 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838952  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:30.839686  115497 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:30.839714  115497 cni.go:84] Creating CNI manager for ""
	I1206 20:00:30.839727  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:30.841824  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:27.540848  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.038457  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.843246  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:30.916583  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:30.974088  115497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=default-k8s-diff-port-380424 minikube.k8s.io/updated_at=2023_12_06T20_00_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.400910  115497 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:31.401056  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.320362  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.630947418s)
	I1206 20:00:31.320445  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:31.349765  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:31.369412  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:31.381350  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:31.381410  115217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1206 20:00:31.626397  115217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:32.039425  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:34.041934  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:31.516285  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.139221  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.639059  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.139995  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.639038  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.139842  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.640037  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.139893  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.639961  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:36.139749  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.383787  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.113041618s)
	I1206 20:00:38.383859  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:38.397718  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:38.406748  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:38.415574  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:38.415633  115591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:38.485595  115591 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:38.485781  115591 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:38.659892  115591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:38.660073  115591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:38.660209  115591 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:38.939756  115591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:38.941971  115591 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:38.942103  115591 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:38.942200  115591 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:38.942296  115591 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:38.942708  115591 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:38.943817  115591 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:38.944130  115591 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:38.944894  115591 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:38.945607  115591 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:38.946355  115591 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:38.947015  115591 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:38.947720  115591 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:38.947795  115591 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:39.140045  115591 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:39.300047  115591 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:39.418439  115591 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:40.060938  115591 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:40.061616  115591 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:40.064208  115591 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:36.042049  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:38.540429  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:36.639372  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.139213  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.639506  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.139159  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.639007  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.139972  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.639969  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.139910  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.639836  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.139009  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.639153  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.139055  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.639853  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.139934  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.639741  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.139776  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.279581  115497 kubeadm.go:1088] duration metric: took 13.305461955s to wait for elevateKubeSystemPrivileges.
	I1206 20:00:44.279625  115497 kubeadm.go:406] StartCluster complete in 5m13.04588426s
	I1206 20:00:44.279660  115497 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.279765  115497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:00:44.282748  115497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.285263  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:00:44.285351  115497 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:00:44.285434  115497 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285459  115497 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285471  115497 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:00:44.285478  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:00:44.285531  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285542  115497 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285561  115497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-380424"
	I1206 20:00:44.285719  115497 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285738  115497 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285747  115497 addons.go:240] addon metrics-server should already be in state true
	I1206 20:00:44.285797  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286023  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286026  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286167  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286190  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.306223  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1206 20:00:44.306441  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39661
	I1206 20:00:44.307505  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.307637  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.308463  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I1206 20:00:44.308651  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.308672  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309154  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.309173  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309295  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.309539  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.310150  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.310183  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.310431  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.312432  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.313004  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.313020  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.315047  115497 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.315065  115497 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:00:44.315094  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.315499  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.315523  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.316248  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.316893  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.316920  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.335555  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I1206 20:00:44.335908  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1206 20:00:44.336636  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.336749  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.337379  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337404  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337791  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337818  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337895  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.338474  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.338502  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.338944  115497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-380424" context rescaled to 1 replicas
	I1206 20:00:44.338979  115497 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:00:44.340731  115497 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:44.339696  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.342367  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:44.342537  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.348774  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I1206 20:00:44.348808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.350935  115497 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:00:44.349433  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.353022  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:00:44.353036  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:00:44.353060  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.353493  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.353512  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.354850  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.355732  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.356894  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.359438  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1206 20:00:44.360009  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.360502  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.360525  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.360899  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.361092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.362575  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.362605  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.362663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.363067  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.363259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.363310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.363544  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.363628  115497 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.363643  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:00:44.363663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.365352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.367261  115497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:00:40.066048  115591 out.go:204]   - Booting up control plane ...
	I1206 20:00:40.066207  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:40.066320  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:40.069077  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:40.086558  115591 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:40.087856  115591 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:40.087969  115591 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:40.224157  115591 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.313051  115217 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1206 20:00:45.313125  115217 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:45.313226  115217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:45.313355  115217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:45.313466  115217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:45.313591  115217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:45.313697  115217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:45.313767  115217 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1206 20:00:45.313844  115217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:45.315759  115217 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:45.315876  115217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:45.315980  115217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:45.316085  115217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:45.316158  115217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:45.316252  115217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:45.316320  115217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:45.316420  115217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:45.316505  115217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:45.316608  115217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:45.316707  115217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:45.316761  115217 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:45.316838  115217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:45.316909  115217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:45.316982  115217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:45.317068  115217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:45.317136  115217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:45.317221  115217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:45.318915  115217 out.go:204]   - Booting up control plane ...
	I1206 20:00:45.319042  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:45.319145  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:45.319253  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:45.319367  115217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:45.319568  115217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.319690  115217 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504419 seconds
	I1206 20:00:45.319828  115217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:45.319978  115217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:45.320042  115217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:45.320189  115217 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-448851 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 20:00:45.320255  115217 kubeadm.go:322] [bootstrap-token] Using token: ms33mw.f0m2wm1rokle0nnu
	I1206 20:00:45.321976  115217 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:45.322105  115217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:45.322229  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:45.322373  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:45.322532  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:45.322673  115217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:45.322759  115217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:45.322845  115217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:45.322857  115217 kubeadm.go:322] 
	I1206 20:00:45.322936  115217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:45.322945  115217 kubeadm.go:322] 
	I1206 20:00:45.323055  115217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:45.323071  115217 kubeadm.go:322] 
	I1206 20:00:45.323105  115217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:45.323196  115217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:45.323270  115217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:45.323282  115217 kubeadm.go:322] 
	I1206 20:00:45.323373  115217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:45.323477  115217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:45.323575  115217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:45.323590  115217 kubeadm.go:322] 
	I1206 20:00:45.323736  115217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1206 20:00:45.323840  115217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:45.323855  115217 kubeadm.go:322] 
	I1206 20:00:45.323984  115217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324187  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:45.324248  115217 kubeadm.go:322]     --control-plane 	  
	I1206 20:00:45.324266  115217 kubeadm.go:322] 
	I1206 20:00:45.324386  115217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:45.324397  115217 kubeadm.go:322] 
	I1206 20:00:45.324501  115217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324651  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:45.324664  115217 cni.go:84] Creating CNI manager for ""
	I1206 20:00:45.324675  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:45.327284  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:41.039495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:43.041892  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:45.042744  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:44.369437  115497 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.369449  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.369458  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:00:44.369482  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.373360  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373394  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373415  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.373538  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373769  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.373830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.374020  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.374077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.374221  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.374800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.375017  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.528574  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.553349  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:00:44.553382  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:00:44.604100  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.605360  115497 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.605799  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:00:44.610007  115497 node_ready.go:49] node "default-k8s-diff-port-380424" has status "Ready":"True"
	I1206 20:00:44.610039  115497 node_ready.go:38] duration metric: took 4.647914ms waiting for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.610052  115497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:44.622684  115497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:44.639914  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:00:44.640005  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:00:44.710284  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:44.710318  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:00:44.767014  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:46.656182  115497 pod_ready.go:102] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:46.941717  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.413097049s)
	I1206 20:00:46.941764  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.33594011s)
	I1206 20:00:46.941787  115497 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1206 20:00:46.941793  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941733  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337595925s)
	I1206 20:00:46.941808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.941841  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941863  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.942167  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.942187  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.942198  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.942207  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.943997  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944031  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944041  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944052  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944060  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944077  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.944088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.944363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944401  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944419  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.984172  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.984206  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.984675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.984714  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.984733  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.345448  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.5783821s)
	I1206 20:00:47.345552  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.345573  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.345987  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.346033  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346046  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346056  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.346088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.346359  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346380  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346392  115497 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-380424"
	I1206 20:00:47.346442  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.348281  115497 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1206 20:00:45.328763  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:45.342986  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:45.373351  115217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:45.373503  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.373559  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=old-k8s-version-448851 minikube.k8s.io/updated_at=2023_12_06T20_00_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.701779  115217 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:45.701907  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.815705  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.445065  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.945361  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.444737  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.945540  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.228883  115591 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004688 seconds
	I1206 20:00:49.229058  115591 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:49.258512  115591 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:49.793797  115591 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:49.794010  115591 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-209025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:50.315415  115591 kubeadm.go:322] [bootstrap-token] Using token: j4xv0f.htia0y0wrnbqnji6
	I1206 20:00:47.349693  115497 addons.go:502] enable addons completed in 3.064343142s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1206 20:00:48.648085  115497 pod_ready.go:92] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.648116  115497 pod_ready.go:81] duration metric: took 4.025396521s waiting for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.648132  115497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660202  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.660235  115497 pod_ready.go:81] duration metric: took 12.09317ms waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660248  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666568  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.666666  115497 pod_ready.go:81] duration metric: took 6.407781ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666694  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679566  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.679653  115497 pod_ready.go:81] duration metric: took 12.938485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679675  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554241  115497 pod_ready.go:92] pod "kube-proxy-khh5n" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.554266  115497 pod_ready.go:81] duration metric: took 874.584613ms waiting for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554275  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845110  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.845140  115497 pod_ready.go:81] duration metric: took 290.857787ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845152  115497 pod_ready.go:38] duration metric: took 5.235087469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:49.845172  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:00:49.845251  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:00:49.861908  115497 api_server.go:72] duration metric: took 5.522870891s to wait for apiserver process to appear ...
	I1206 20:00:49.861943  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:00:49.861965  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 20:00:49.868675  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 20:00:49.870214  115497 api_server.go:141] control plane version: v1.28.4
	I1206 20:00:49.870254  115497 api_server.go:131] duration metric: took 8.303187ms to wait for apiserver health ...
	I1206 20:00:49.870266  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:00:50.047974  115497 system_pods.go:59] 8 kube-system pods found
	I1206 20:00:50.048004  115497 system_pods.go:61] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.048011  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.048018  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.048025  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.048030  115497 system_pods.go:61] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.048036  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.048045  115497 system_pods.go:61] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.048052  115497 system_pods.go:61] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.048063  115497 system_pods.go:74] duration metric: took 177.789423ms to wait for pod list to return data ...
	I1206 20:00:50.048073  115497 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:00:50.246867  115497 default_sa.go:45] found service account: "default"
	I1206 20:00:50.246903  115497 default_sa.go:55] duration metric: took 198.823117ms for default service account to be created ...
	I1206 20:00:50.246914  115497 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:00:50.447688  115497 system_pods.go:86] 8 kube-system pods found
	I1206 20:00:50.447777  115497 system_pods.go:89] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.447798  115497 system_pods.go:89] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.447815  115497 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.447846  115497 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.447870  115497 system_pods.go:89] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.447886  115497 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.447904  115497 system_pods.go:89] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.447920  115497 system_pods.go:89] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.447953  115497 system_pods.go:126] duration metric: took 201.030369ms to wait for k8s-apps to be running ...
	I1206 20:00:50.447978  115497 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:00:50.448057  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:50.468801  115497 system_svc.go:56] duration metric: took 20.810606ms WaitForService to wait for kubelet.
	I1206 20:00:50.468837  115497 kubeadm.go:581] duration metric: took 6.129827661s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:00:50.468860  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:00:50.646083  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:00:50.646124  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 20:00:50.646138  115497 node_conditions.go:105] duration metric: took 177.272089ms to run NodePressure ...
	I1206 20:00:50.646153  115497 start.go:228] waiting for startup goroutines ...
	I1206 20:00:50.646164  115497 start.go:233] waiting for cluster config update ...
	I1206 20:00:50.646184  115497 start.go:242] writing updated cluster config ...
	I1206 20:00:50.646551  115497 ssh_runner.go:195] Run: rm -f paused
	I1206 20:00:50.711246  115497 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:00:50.713989  115497 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-380424" cluster and "default" namespace by default
	I1206 20:00:50.317018  115591 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:50.317155  115591 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:50.325410  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:50.335197  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:50.339351  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:50.343930  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:50.352323  115591 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:50.375514  115591 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:50.703397  115591 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:50.753323  115591 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:50.753351  115591 kubeadm.go:322] 
	I1206 20:00:50.753419  115591 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:50.753430  115591 kubeadm.go:322] 
	I1206 20:00:50.753522  115591 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:50.753539  115591 kubeadm.go:322] 
	I1206 20:00:50.753570  115591 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:50.753642  115591 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:50.753706  115591 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:50.753717  115591 kubeadm.go:322] 
	I1206 20:00:50.753780  115591 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:50.753790  115591 kubeadm.go:322] 
	I1206 20:00:50.753847  115591 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:50.753862  115591 kubeadm.go:322] 
	I1206 20:00:50.753928  115591 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:50.754020  115591 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:50.754109  115591 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:50.754120  115591 kubeadm.go:322] 
	I1206 20:00:50.754221  115591 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:50.754317  115591 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:50.754327  115591 kubeadm.go:322] 
	I1206 20:00:50.754426  115591 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754552  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:50.754583  115591 kubeadm.go:322] 	--control-plane 
	I1206 20:00:50.754593  115591 kubeadm.go:322] 
	I1206 20:00:50.754690  115591 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:50.754707  115591 kubeadm.go:322] 
	I1206 20:00:50.754802  115591 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754931  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:50.755776  115591 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:50.755809  115591 cni.go:84] Creating CNI manager for ""
	I1206 20:00:50.755820  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:50.759045  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:47.539932  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:50.039553  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:48.445172  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:48.944908  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.445418  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.944612  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.445278  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.944545  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.444775  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.945470  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.445365  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.944742  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.760722  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:50.792095  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:50.854264  115591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:50.854443  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.854549  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=embed-certs-209025 minikube.k8s.io/updated_at=2023_12_06T20_00_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.894717  115591 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:51.388829  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.515185  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.132878  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.633171  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.132766  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.632887  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.132824  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.044531  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:54.538924  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:53.444641  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.945468  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.444996  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.944687  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.444757  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.945342  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.445585  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.945489  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.445628  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.944895  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.632961  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.132361  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.632305  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.132439  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.632252  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.132956  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.633210  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.133090  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.632198  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.133167  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.445440  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.945554  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.445298  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.945574  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.179151  115217 kubeadm.go:1088] duration metric: took 14.805687634s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:00.179185  115217 kubeadm.go:406] StartCluster complete in 5m46.007596294s
	I1206 20:01:00.179204  115217 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.179291  115217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:00.181490  115217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.181810  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:00.181933  115217 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:00.182031  115217 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182063  115217 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-448851"
	W1206 20:01:00.182071  115217 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:00.182126  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.182126  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 20:01:00.182180  115217 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182198  115217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448851"
	I1206 20:01:00.182554  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182572  115217 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182581  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182591  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182596  115217 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-448851"
	W1206 20:01:00.182606  115217 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:00.182613  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182735  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.183101  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.183146  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.201450  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I1206 20:01:00.203683  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I1206 20:01:00.203715  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.203800  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1206 20:01:00.204181  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204341  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204386  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204409  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204863  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204877  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204884  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204895  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204950  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205328  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205333  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205489  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.205520  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.205560  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.205992  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.206064  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.209487  115217 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-448851"
	W1206 20:01:00.209512  115217 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:00.209545  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.209987  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.210033  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.227092  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1206 20:01:00.227961  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.228610  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.228633  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.229107  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.229342  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.230638  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I1206 20:01:00.231552  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.231863  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.235076  115217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:00.232196  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.232926  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I1206 20:01:00.237258  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.237284  115217 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.237310  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:00.237333  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.237682  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.238034  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.238212  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.238240  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.238580  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.238612  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.238977  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.239198  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.240631  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.243107  115217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:00.241155  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.241833  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.245218  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:00.245244  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:00.245267  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.245315  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.245333  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.245505  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.245639  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.245737  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.248492  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249278  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.249313  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249597  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.249811  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.249971  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.250090  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.259179  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I1206 20:01:00.259617  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.260068  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.260090  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.260461  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.260685  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.262284  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.262586  115217 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.262604  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:00.262623  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.265183  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265643  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.265661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265890  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.266078  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.266240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.266941  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.271403  115217 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-448851" context rescaled to 1 replicas
	I1206 20:01:00.271435  115217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:00.273197  115217 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:57.039307  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:59.039639  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:00.274454  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:00.597204  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:00.597240  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:00.621632  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.623444  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.630185  115217 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.630280  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:00.633576  115217 node_ready.go:49] node "old-k8s-version-448851" has status "Ready":"True"
	I1206 20:01:00.633603  115217 node_ready.go:38] duration metric: took 3.385927ms waiting for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.633616  115217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:00.717216  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:00.717273  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:00.735998  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:00.866186  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:00.866218  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:01.066040  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:01.835164  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213479825s)
	I1206 20:01:01.835230  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835243  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835558  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835605  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835615  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.835648  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835939  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835974  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835983  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.872799  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.872835  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.873282  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.873317  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.873336  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.258697  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635202106s)
	I1206 20:01:02.258754  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.258769  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.258773  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.628450705s)
	I1206 20:01:02.258806  115217 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:02.259113  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.260973  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261002  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261014  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.261025  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.261416  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261440  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261424  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.375593  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.309500554s)
	I1206 20:01:02.375659  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.375680  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376064  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376155  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376168  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376185  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.376193  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376522  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376532  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376543  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376559  115217 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-448851"
	I1206 20:01:02.378457  115217 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:02.380099  115217 addons.go:502] enable addons completed in 2.198162438s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:00:59.632971  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.133124  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.633148  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.132260  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.632323  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.132575  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.632268  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.132789  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.633155  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.132754  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.321130  115591 kubeadm.go:1088] duration metric: took 13.466729355s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:04.321175  115591 kubeadm.go:406] StartCluster complete in 5m10.1110739s
	I1206 20:01:04.321200  115591 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.321311  115591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:04.324158  115591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.324502  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:04.324531  115591 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:04.324609  115591 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-209025"
	I1206 20:01:04.324633  115591 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-209025"
	W1206 20:01:04.324640  115591 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:04.324670  115591 addons.go:69] Setting default-storageclass=true in profile "embed-certs-209025"
	I1206 20:01:04.324699  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.324702  115591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-209025"
	I1206 20:01:04.324729  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:01:04.324799  115591 addons.go:69] Setting metrics-server=true in profile "embed-certs-209025"
	I1206 20:01:04.324813  115591 addons.go:231] Setting addon metrics-server=true in "embed-certs-209025"
	W1206 20:01:04.324820  115591 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:04.324858  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.325100  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325126  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325127  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325163  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325191  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325213  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.344127  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I1206 20:01:04.344361  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I1206 20:01:04.344866  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.344978  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.345615  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345635  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.345756  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345766  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.346201  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.346772  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.346821  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.347367  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.347741  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.348264  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
	I1206 20:01:04.348754  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.349655  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.349676  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.350156  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.352233  115591 addons.go:231] Setting addon default-storageclass=true in "embed-certs-209025"
	W1206 20:01:04.352257  115591 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:04.352286  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.352700  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.352734  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.353530  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.353563  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.365607  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I1206 20:01:04.366094  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.366493  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.366514  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.366780  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.366908  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.368611  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.370655  115591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:04.372351  115591 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.372372  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1206 20:01:04.372376  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:04.372402  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.373021  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I1206 20:01:04.374446  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.375104  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.375126  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.375570  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.375769  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.376448  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.376851  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.376907  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.377123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.377377  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.377531  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.379514  115591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:04.377862  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.378152  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.381562  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.381682  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:04.381700  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:04.381722  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.382619  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.382788  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.383576  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.384146  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.384176  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.386297  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.386684  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.386734  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.387477  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.387726  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.387913  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.388055  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.401629  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I1206 20:01:04.402214  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.402804  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.402826  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.403127  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.403337  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.405059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.405404  115591 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.405427  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:04.405449  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.408608  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409145  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.409176  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409443  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.409640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.409860  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.410016  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	W1206 20:01:04.462788  115591 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "embed-certs-209025" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1206 20:01:04.462843  115591 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1206 20:01:04.462872  115591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:04.464916  115591 out.go:177] * Verifying Kubernetes components...
	I1206 20:01:04.466388  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:01.039870  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:03.550944  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.231905  115078 pod_ready.go:81] duration metric: took 4m0.001038985s waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:05.231950  115078 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:01:05.231962  115078 pod_ready.go:38] duration metric: took 4m4.801417566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:05.231988  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:05.232081  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:05.232155  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:05.294538  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:05.294570  115078 cri.go:89] found id: ""
	I1206 20:01:05.294581  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:05.294643  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.300221  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:05.300300  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:05.359655  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:05.359685  115078 cri.go:89] found id: ""
	I1206 20:01:05.359696  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:05.359759  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.364518  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:05.364600  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:05.408448  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:05.408490  115078 cri.go:89] found id: ""
	I1206 20:01:05.408510  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:05.408575  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.413345  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:05.413428  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:05.462932  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.462960  115078 cri.go:89] found id: ""
	I1206 20:01:05.462971  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:05.463034  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.468632  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:05.468713  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:05.519690  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:05.519720  115078 cri.go:89] found id: ""
	I1206 20:01:05.519731  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:05.519789  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.525847  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:05.525933  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:05.580475  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:05.580537  115078 cri.go:89] found id: ""
	I1206 20:01:05.580550  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:05.580623  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.585602  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:05.585688  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:05.636350  115078 cri.go:89] found id: ""
	I1206 20:01:05.636383  115078 logs.go:284] 0 containers: []
	W1206 20:01:05.636394  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:05.636403  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:05.636469  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:05.678819  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:05.678846  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:05.678853  115078 cri.go:89] found id: ""
	I1206 20:01:05.678863  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:05.678929  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.683845  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.689989  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:05.690021  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.745510  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:05.745554  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:04.580869  115591 node_ready.go:35] waiting up to 6m0s for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.580933  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:04.585219  115591 node_ready.go:49] node "embed-certs-209025" has status "Ready":"True"
	I1206 20:01:04.585267  115591 node_ready.go:38] duration metric: took 4.363508ms waiting for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.585281  115591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:04.595166  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:04.611829  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.622127  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.628233  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:04.628260  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:04.706473  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:04.706498  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:04.790827  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:04.790868  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:04.840367  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:06.312054  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.73108071s)
	I1206 20:01:06.312092  115591 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:06.312099  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.700233834s)
	I1206 20:01:06.312147  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312503  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312519  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312531  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312541  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312895  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312985  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:06.334314  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.334343  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.334719  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.334742  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.677046  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.176051  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.553877678s)
	I1206 20:01:07.176112  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176124  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176520  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176551  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.176570  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176584  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176859  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.176852  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176884  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.287377  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.446934189s)
	I1206 20:01:07.287525  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.287586  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288055  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.288055  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288082  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288096  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.288105  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288358  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288372  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288384  115591 addons.go:467] Verifying addon metrics-server=true in "embed-certs-209025"
	I1206 20:01:07.291120  115591 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:03.100131  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.107571  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.599078  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.292151  115591 addons.go:502] enable addons completed in 2.967619291s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:01:09.122709  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:06.258156  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:06.258193  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:06.321049  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:06.321084  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:06.376243  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:06.376281  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:06.441701  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:06.441742  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:06.493399  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:06.493440  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:06.545681  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:06.545717  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:06.602830  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:06.602864  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:06.618874  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:06.618903  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:06.694329  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:06.694375  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:06.748217  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:06.748255  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:06.933616  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:06.933655  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.511340  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.530228  115078 api_server.go:72] duration metric: took 4m16.464196787s to wait for apiserver process to appear ...
	I1206 20:01:09.530254  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.530295  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:09.530357  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:09.574265  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.574301  115078 cri.go:89] found id: ""
	I1206 20:01:09.574313  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:09.574377  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.579240  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:09.579310  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:09.622512  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.622540  115078 cri.go:89] found id: ""
	I1206 20:01:09.622551  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:09.622619  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.627770  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:09.627847  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:09.675976  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:09.676007  115078 cri.go:89] found id: ""
	I1206 20:01:09.676018  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:09.676082  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.680750  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:09.680824  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:09.721081  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.721108  115078 cri.go:89] found id: ""
	I1206 20:01:09.721119  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:09.721181  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.725501  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:09.725568  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:09.777674  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:09.777700  115078 cri.go:89] found id: ""
	I1206 20:01:09.777709  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:09.777767  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.782475  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:09.782558  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:09.833889  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:09.833916  115078 cri.go:89] found id: ""
	I1206 20:01:09.833926  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:09.833985  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.838897  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:09.838977  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:09.880892  115078 cri.go:89] found id: ""
	I1206 20:01:09.880923  115078 logs.go:284] 0 containers: []
	W1206 20:01:09.880934  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:09.880943  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:09.881011  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:09.924025  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:09.924058  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:09.924065  115078 cri.go:89] found id: ""
	I1206 20:01:09.924075  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:09.924142  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.928667  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.933112  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:09.933134  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:09.949212  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:09.949254  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.996227  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:09.996261  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:10.046607  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:10.046645  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:10.102171  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:10.102214  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:10.160600  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:10.160641  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:10.203673  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:10.203709  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:10.681783  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:10.681824  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:10.813061  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:10.813102  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:10.857895  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:10.857930  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:10.904589  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:10.904625  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:10.957570  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:10.957608  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.624997  115591 pod_ready.go:92] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.625025  115591 pod_ready.go:81] duration metric: took 5.029829059s waiting for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.625038  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632534  115591 pod_ready.go:92] pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.632561  115591 pod_ready.go:81] duration metric: took 7.514952ms waiting for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632574  115591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642077  115591 pod_ready.go:92] pod "etcd-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.642107  115591 pod_ready.go:81] duration metric: took 9.52505ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642121  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648636  115591 pod_ready.go:92] pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.648658  115591 pod_ready.go:81] duration metric: took 6.530394ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648667  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656534  115591 pod_ready.go:92] pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.656561  115591 pod_ready.go:81] duration metric: took 7.887248ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656573  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019281  115591 pod_ready.go:92] pod "kube-proxy-nf2cw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.019310  115591 pod_ready.go:81] duration metric: took 362.727602ms waiting for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019323  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419938  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.419971  115591 pod_ready.go:81] duration metric: took 400.640145ms waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419982  115591 pod_ready.go:38] duration metric: took 5.834689614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:10.420000  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:10.420062  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:10.436691  115591 api_server.go:72] duration metric: took 5.973781556s to wait for apiserver process to appear ...
	I1206 20:01:10.436723  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:10.436746  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 20:01:10.442876  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 20:01:10.444774  115591 api_server.go:141] control plane version: v1.28.4
	I1206 20:01:10.444798  115591 api_server.go:131] duration metric: took 8.067787ms to wait for apiserver health ...
	I1206 20:01:10.444808  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:10.624219  115591 system_pods.go:59] 9 kube-system pods found
	I1206 20:01:10.624251  115591 system_pods.go:61] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:10.624256  115591 system_pods.go:61] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:10.624260  115591 system_pods.go:61] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:10.624264  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:10.624268  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:10.624272  115591 system_pods.go:61] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:10.624275  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:10.624282  115591 system_pods.go:61] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.624286  115591 system_pods.go:61] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:10.624296  115591 system_pods.go:74] duration metric: took 179.481721ms to wait for pod list to return data ...
	I1206 20:01:10.624306  115591 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:10.818715  115591 default_sa.go:45] found service account: "default"
	I1206 20:01:10.818741  115591 default_sa.go:55] duration metric: took 194.428895ms for default service account to be created ...
	I1206 20:01:10.818750  115591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:11.022686  115591 system_pods.go:86] 9 kube-system pods found
	I1206 20:01:11.022713  115591 system_pods.go:89] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:11.022718  115591 system_pods.go:89] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:11.022722  115591 system_pods.go:89] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:11.022726  115591 system_pods.go:89] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:11.022730  115591 system_pods.go:89] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:11.022734  115591 system_pods.go:89] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:11.022738  115591 system_pods.go:89] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:11.022744  115591 system_pods.go:89] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.022750  115591 system_pods.go:89] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:11.022762  115591 system_pods.go:126] duration metric: took 204.004835ms to wait for k8s-apps to be running ...
	I1206 20:01:11.022774  115591 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:11.022824  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:11.041212  115591 system_svc.go:56] duration metric: took 18.424469ms WaitForService to wait for kubelet.
	I1206 20:01:11.041256  115591 kubeadm.go:581] duration metric: took 6.578354937s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:11.041291  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:11.219045  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:11.219079  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:11.219094  115591 node_conditions.go:105] duration metric: took 177.793737ms to run NodePressure ...
	I1206 20:01:11.219107  115591 start.go:228] waiting for startup goroutines ...
	I1206 20:01:11.219113  115591 start.go:233] waiting for cluster config update ...
	I1206 20:01:11.219125  115591 start.go:242] writing updated cluster config ...
	I1206 20:01:11.219482  115591 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:11.275863  115591 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:01:11.278074  115591 out.go:177] * Done! kubectl is now configured to use "embed-certs-209025" cluster and "default" namespace by default
	I1206 20:01:09.099590  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.099616  115217 pod_ready.go:81] duration metric: took 8.363590309s waiting for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.099626  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.103452  115217 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103485  115217 pod_ready.go:81] duration metric: took 3.845902ms waiting for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:09.103499  115217 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103507  115217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110700  115217 pod_ready.go:92] pod "kube-proxy-wvqmw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.110721  115217 pod_ready.go:81] duration metric: took 7.207091ms waiting for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110729  115217 pod_ready.go:38] duration metric: took 8.477100108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:09.110744  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:09.110791  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.127244  115217 api_server.go:72] duration metric: took 8.855777965s to wait for apiserver process to appear ...
	I1206 20:01:09.127272  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.127290  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 20:01:09.134411  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 20:01:09.135553  115217 api_server.go:141] control plane version: v1.16.0
	I1206 20:01:09.135578  115217 api_server.go:131] duration metric: took 8.298936ms to wait for apiserver health ...
	I1206 20:01:09.135589  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:09.140145  115217 system_pods.go:59] 4 kube-system pods found
	I1206 20:01:09.140167  115217 system_pods.go:61] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.140172  115217 system_pods.go:61] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.140178  115217 system_pods.go:61] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.140183  115217 system_pods.go:61] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.140191  115217 system_pods.go:74] duration metric: took 4.595695ms to wait for pod list to return data ...
	I1206 20:01:09.140198  115217 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:09.142852  115217 default_sa.go:45] found service account: "default"
	I1206 20:01:09.142877  115217 default_sa.go:55] duration metric: took 2.67139ms for default service account to be created ...
	I1206 20:01:09.142888  115217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:09.145800  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.145822  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.145827  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.145833  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.145838  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.145856  115217 retry.go:31] will retry after 199.361191ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.351430  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.351475  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.351485  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.351497  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.351504  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.351529  115217 retry.go:31] will retry after 239.084983ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.595441  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.595479  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.595487  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.595498  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.595506  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.595528  115217 retry.go:31] will retry after 380.909676ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.982061  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.982088  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.982093  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.982101  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.982115  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.982133  115217 retry.go:31] will retry after 451.472574ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:10.439270  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:10.439303  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:10.439311  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:10.439321  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.439328  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:10.439350  115217 retry.go:31] will retry after 654.845182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.101088  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.101129  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.101137  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.101147  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.101155  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.101178  115217 retry.go:31] will retry after 650.939663ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.757024  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.757053  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.757058  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.757065  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.757070  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.757088  115217 retry.go:31] will retry after 828.555469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:12.591156  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:12.591193  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:12.591209  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:12.591220  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:12.591227  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:12.591254  115217 retry.go:31] will retry after 1.26518336s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.000472  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:11.000505  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.545345  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 20:01:13.551262  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 20:01:13.553129  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 20:01:13.553161  115078 api_server.go:131] duration metric: took 4.022898619s to wait for apiserver health ...
	I1206 20:01:13.553173  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:13.553204  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:13.553287  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:13.619861  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:13.619892  115078 cri.go:89] found id: ""
	I1206 20:01:13.619903  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:13.619994  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.625028  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:13.625099  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:13.667275  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:13.667300  115078 cri.go:89] found id: ""
	I1206 20:01:13.667309  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:13.667378  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.671673  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:13.671740  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:13.713319  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.713351  115078 cri.go:89] found id: ""
	I1206 20:01:13.713361  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:13.713428  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.718155  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:13.718219  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:13.758383  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.758414  115078 cri.go:89] found id: ""
	I1206 20:01:13.758424  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:13.758488  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.762747  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:13.762826  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:13.803602  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:13.803627  115078 cri.go:89] found id: ""
	I1206 20:01:13.803635  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:13.803685  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.808083  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:13.808160  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:13.852504  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:13.852531  115078 cri.go:89] found id: ""
	I1206 20:01:13.852539  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:13.852598  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.857213  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:13.857322  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:13.896981  115078 cri.go:89] found id: ""
	I1206 20:01:13.897023  115078 logs.go:284] 0 containers: []
	W1206 20:01:13.897035  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:13.897044  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:13.897110  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:13.940969  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:13.940996  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:13.941004  115078 cri.go:89] found id: ""
	I1206 20:01:13.941013  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:13.941075  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.945508  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.949933  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:13.949961  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.986034  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:13.986065  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:14.045155  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:14.045197  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:14.091205  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:14.091240  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:14.130184  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:14.130221  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:14.176981  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:14.177024  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:14.191755  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:14.191796  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:14.316375  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:14.316413  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:14.359700  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:14.359746  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:14.415906  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:14.415952  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:14.471453  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:14.471496  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:14.520012  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:14.520051  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:14.567445  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:14.567482  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:17.434636  115078 system_pods.go:59] 8 kube-system pods found
	I1206 20:01:17.434671  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.434676  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.434680  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.434685  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.434688  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.434692  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.434700  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.434706  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.434714  115078 system_pods.go:74] duration metric: took 3.881535405s to wait for pod list to return data ...
	I1206 20:01:17.434724  115078 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:17.437744  115078 default_sa.go:45] found service account: "default"
	I1206 20:01:17.437770  115078 default_sa.go:55] duration metric: took 3.038532ms for default service account to be created ...
	I1206 20:01:17.437780  115078 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:17.444539  115078 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:17.444567  115078 system_pods.go:89] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.444572  115078 system_pods.go:89] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.444577  115078 system_pods.go:89] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.444583  115078 system_pods.go:89] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.444587  115078 system_pods.go:89] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.444592  115078 system_pods.go:89] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.444602  115078 system_pods.go:89] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.444608  115078 system_pods.go:89] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.444619  115078 system_pods.go:126] duration metric: took 6.832576ms to wait for k8s-apps to be running ...
	I1206 20:01:17.444629  115078 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:17.444687  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:17.464821  115078 system_svc.go:56] duration metric: took 20.181153ms WaitForService to wait for kubelet.
	I1206 20:01:17.464866  115078 kubeadm.go:581] duration metric: took 4m24.398841426s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:17.464894  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:17.467938  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:17.467964  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:17.467975  115078 node_conditions.go:105] duration metric: took 3.076458ms to run NodePressure ...
	I1206 20:01:17.467988  115078 start.go:228] waiting for startup goroutines ...
	I1206 20:01:17.467994  115078 start.go:233] waiting for cluster config update ...
	I1206 20:01:17.468004  115078 start.go:242] writing updated cluster config ...
	I1206 20:01:17.468290  115078 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:17.523451  115078 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1206 20:01:17.525609  115078 out.go:177] * Done! kubectl is now configured to use "no-preload-989559" cluster and "default" namespace by default
	I1206 20:01:13.862479  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:13.862506  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:13.862512  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:13.862519  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:13.862523  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:13.862542  115217 retry.go:31] will retry after 1.299046526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:15.166601  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:15.166630  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:15.166635  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:15.166642  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:15.166647  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:15.166667  115217 retry.go:31] will retry after 1.832151574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:17.005707  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:17.005739  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:17.005746  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:17.005754  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.005774  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:17.005797  115217 retry.go:31] will retry after 1.796371959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:18.808729  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:18.808757  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:18.808763  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:18.808770  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:18.808775  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:18.808792  115217 retry.go:31] will retry after 2.814845209s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:21.630762  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:21.630791  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:21.630796  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:21.630811  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:21.630816  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:21.630834  115217 retry.go:31] will retry after 2.866148194s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:24.502168  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:24.502198  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:24.502203  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:24.502211  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:24.502215  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:24.502233  115217 retry.go:31] will retry after 3.777894628s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:28.284776  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:28.284812  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:28.284818  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:28.284825  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:28.284829  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:28.284847  115217 retry.go:31] will retry after 4.837538668s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:33.127301  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:33.127330  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:33.127336  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:33.127344  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:33.127349  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:33.127370  115217 retry.go:31] will retry after 6.833662344s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:39.966417  115217 system_pods.go:86] 5 kube-system pods found
	I1206 20:01:39.966450  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:39.966458  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Pending
	I1206 20:01:39.966465  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:39.966476  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:39.966483  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:39.966504  115217 retry.go:31] will retry after 9.204033337s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:49.176395  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:49.176434  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:49.176442  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Pending
	I1206 20:01:49.176450  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:49.176457  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:49.176462  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:49.176469  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Pending
	I1206 20:01:49.176479  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:49.176487  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:49.176511  115217 retry.go:31] will retry after 9.456016194s: missing components: etcd, kube-scheduler
	I1206 20:01:58.638807  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:58.638837  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:58.638842  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Running
	I1206 20:01:58.638847  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:58.638851  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:58.638855  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:58.638861  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Running
	I1206 20:01:58.638867  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:58.638872  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:58.638879  115217 system_pods.go:126] duration metric: took 49.495986809s to wait for k8s-apps to be running ...
	I1206 20:01:58.638886  115217 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:58.638935  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:58.654683  115217 system_svc.go:56] duration metric: took 15.783018ms WaitForService to wait for kubelet.
	I1206 20:01:58.654715  115217 kubeadm.go:581] duration metric: took 58.383258338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:58.654738  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:58.659189  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:58.659215  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:58.659226  115217 node_conditions.go:105] duration metric: took 4.482979ms to run NodePressure ...
	I1206 20:01:58.659239  115217 start.go:228] waiting for startup goroutines ...
	I1206 20:01:58.659245  115217 start.go:233] waiting for cluster config update ...
	I1206 20:01:58.659255  115217 start.go:242] writing updated cluster config ...
	I1206 20:01:58.659522  115217 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:58.710716  115217 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1206 20:01:58.713372  115217 out.go:177] 
	W1206 20:01:58.714711  115217 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1206 20:01:58.716208  115217 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1206 20:01:58.717734  115217 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-448851" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:55:59 UTC, ends at Wed 2023-12-06 20:10:19 UTC. --
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.268490106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893419268469675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=0cb02fe5-3844-4a06-880a-00397011524b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.269440220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=611f5ba5-4c20-49f0-bc5b-fa6d6c243915 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.269905804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=611f5ba5-4c20-49f0-bc5b-fa6d6c243915 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.270651619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=611f5ba5-4c20-49f0-bc5b-fa6d6c243915 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.325981114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=601f0fe3-0989-40ef-9709-d346939fe168 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.326094157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=601f0fe3-0989-40ef-9709-d346939fe168 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.327755375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=597ada82-f953-4328-bacb-71c8795a8805 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.328224057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893419328205780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=597ada82-f953-4328-bacb-71c8795a8805 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.329091583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7e152463-b3ea-4adf-a4b2-71c10add1ecb name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.329157375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7e152463-b3ea-4adf-a4b2-71c10add1ecb name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.329518718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7e152463-b3ea-4adf-a4b2-71c10add1ecb name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.377426644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5bcdcc66-4896-4bd9-960f-fd0951328087 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.377482040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5bcdcc66-4896-4bd9-960f-fd0951328087 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.379282486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b9dd88ea-03a2-4ccb-8fa8-d6519b290055 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.379740608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893419379727101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=b9dd88ea-03a2-4ccb-8fa8-d6519b290055 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.380304904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6cf9bc38-d2a8-462e-bde0-6383e6c5c97c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.380349185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6cf9bc38-d2a8-462e-bde0-6383e6c5c97c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.380526119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6cf9bc38-d2a8-462e-bde0-6383e6c5c97c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.417313975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=002f461e-070c-48e1-aad6-380e7cd15b8e name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.417397378Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=002f461e-070c-48e1-aad6-380e7cd15b8e name=/runtime.v1.RuntimeService/Version
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.420013952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=51e84a14-700b-497e-a34f-3182ab44c37f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.420573548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893419420557028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=51e84a14-700b-497e-a34f-3182ab44c37f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.421919198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=70bb37b4-c61b-4ea5-83a8-e32d943adef9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.421994607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=70bb37b4-c61b-4ea5-83a8-e32d943adef9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:10:19 no-preload-989559 crio[721]: time="2023-12-06 20:10:19.422201645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=70bb37b4-c61b-4ea5-83a8-e32d943adef9 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec1601a49c79c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   738e0ea3813b5       storage-provisioner
	93aee471c37fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   86500c7e690bb       coredns-76f75df574-h9pkz
	e2037a52e07f0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   9bc1deb7b22d5       busybox
	d07b3a050ef19       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   738e0ea3813b5       storage-provisioner
	0da9ad5d9749c       86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff                                      13 minutes ago      Running             kube-proxy                1                   c023cca4e4bfd       kube-proxy-zgqvt
	c00065611a1f7       b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542                                      13 minutes ago      Running             kube-scheduler            1                   36098303ba1ed       kube-scheduler-no-preload-989559
	7633ca5afa8ae       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   97b0fcdcbb404       etcd-no-preload-989559
	43c8e91cea581       b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09                                      13 minutes ago      Running             kube-controller-manager   1                   7d58a8cccf3b8       kube-controller-manager-no-preload-989559
	f5b4ca951aec7       5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956                                      13 minutes ago      Running             kube-apiserver            1                   5206c98e2d7ff       kube-apiserver-no-preload-989559
	
	* 
	* ==> coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42717 - 22482 "HINFO IN 3959492625878978717.4147345806210626056. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027901011s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-989559
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-989559
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=no-preload-989559
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T19_47_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:47:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-989559
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 20:10:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:07:32 +0000   Wed, 06 Dec 2023 19:47:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:07:32 +0000   Wed, 06 Dec 2023 19:47:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:07:32 +0000   Wed, 06 Dec 2023 19:47:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:07:32 +0000   Wed, 06 Dec 2023 19:57:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    no-preload-989559
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f68799b222e5492590de8f6722e893a0
	  System UUID:                f68799b2-22e5-4925-90de-8f6722e893a0
	  Boot ID:                    ea5532e0-30f2-4abf-a496-684a2ba5aa4c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.1
	  Kube-Proxy Version:         v1.29.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-76f75df574-h9pkz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-989559                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-989559             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-989559    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-zgqvt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-989559             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-vz7qc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         23m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-989559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-989559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-989559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-989559 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node no-preload-989559 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-989559 event: Registered Node no-preload-989559 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-989559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-989559 event: Registered Node no-preload-989559 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.953767] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.569585] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.165097] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec 6 19:56] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.457146] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.157919] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.165544] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.112495] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.240262] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +30.371066] systemd-fstab-generator[1341]: Ignoring "noauto" for root device
	[ +16.093619] kauditd_printk_skb: 24 callbacks suppressed
	
	* 
	* ==> etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] <==
	* {"level":"info","ts":"2023-12-06T19:56:46.719512Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","added-peer-id":"c5263387c79c0223","added-peer-peer-urls":["https://192.168.39.5:2380"]}
	{"level":"info","ts":"2023-12-06T19:56:46.719801Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:56:46.719855Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:56:46.737735Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-06T19:56:46.737848Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2023-12-06T19:56:46.737988Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2023-12-06T19:56:46.743875Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c5263387c79c0223","initial-advertise-peer-urls":["https://192.168.39.5:2380"],"listen-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-06T19:56:46.743941Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-06T19:56:48.369716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-06T19:56:48.369845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-06T19:56:48.369893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 received MsgPreVoteResp from c5263387c79c0223 at term 2"}
	{"level":"info","ts":"2023-12-06T19:56:48.369957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became candidate at term 3"}
	{"level":"info","ts":"2023-12-06T19:56:48.369982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 received MsgVoteResp from c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2023-12-06T19:56:48.370018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became leader at term 3"}
	{"level":"info","ts":"2023-12-06T19:56:48.370054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c5263387c79c0223 elected leader c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2023-12-06T19:56:48.371887Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c5263387c79c0223","local-member-attributes":"{Name:no-preload-989559 ClientURLs:[https://192.168.39.5:2379]}","request-path":"/0/members/c5263387c79c0223/attributes","cluster-id":"436188ec3031a10e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T19:56:48.371959Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:56:48.371905Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:56:48.373011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T19:56:48.373066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T19:56:48.375778Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T19:56:48.375824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.5:2379"}
	{"level":"info","ts":"2023-12-06T20:06:48.410047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":827}
	{"level":"info","ts":"2023-12-06T20:06:48.41337Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":827,"took":"2.500425ms","hash":625595744}
	{"level":"info","ts":"2023-12-06T20:06:48.413507Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":625595744,"revision":827,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  20:10:19 up 14 min,  0 users,  load average: 0.21, 0.22, 0.14
	Linux no-preload-989559 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] <==
	* I1206 20:04:50.845897       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:06:49.846030       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:06:49.846160       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1206 20:06:50.847012       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:06:50.847063       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:06:50.847072       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:06:50.847112       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:06:50.847155       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:06:50.848533       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:07:50.847969       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:07:50.848146       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:07:50.848205       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:07:50.849310       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:07:50.849388       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:07:50.849417       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:09:50.848655       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:09:50.848712       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:09:50.848720       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:09:50.849917       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:09:50.850077       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:09:50.850161       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] <==
	* I1206 20:04:33.098893       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:05:02.632065       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:05:03.110159       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:05:32.636966       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:05:33.118332       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:06:02.645063       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:06:03.128270       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:06:32.650574       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:06:33.137911       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:07:02.657308       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:07:03.145896       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:07:32.663035       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:07:33.156667       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:07:58.968856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="348.567µs"
	E1206 20:08:02.669398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:08:03.164350       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:08:12.960711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="241.341µs"
	E1206 20:08:32.674649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:08:33.173003       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:09:02.681735       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:09:03.183458       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:09:32.687840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:09:33.191570       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:10:02.694165       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:10:03.200436       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] <==
	* I1206 19:56:52.350661       1 server_others.go:72] "Using iptables proxy"
	I1206 19:56:52.360128       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I1206 19:56:52.415739       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1206 19:56:52.415789       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 19:56:52.415802       1 server_others.go:168] "Using iptables Proxier"
	I1206 19:56:52.419388       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 19:56:52.419910       1 server.go:865] "Version info" version="v1.29.0-rc.1"
	I1206 19:56:52.420033       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:56:52.425133       1 config.go:188] "Starting service config controller"
	I1206 19:56:52.425182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 19:56:52.425204       1 config.go:97] "Starting endpoint slice config controller"
	I1206 19:56:52.425242       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 19:56:52.429544       1 config.go:315] "Starting node config controller"
	I1206 19:56:52.429723       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 19:56:52.525467       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 19:56:52.525540       1 shared_informer.go:318] Caches are synced for service config
	I1206 19:56:52.529951       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] <==
	* I1206 19:56:47.584232       1 serving.go:380] Generated self-signed cert in-memory
	W1206 19:56:49.783146       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 19:56:49.783393       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 19:56:49.783410       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 19:56:49.783416       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 19:56:49.883249       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.1"
	I1206 19:56:49.883302       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:56:49.886556       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1206 19:56:49.890002       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 19:56:49.890072       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 19:56:49.890416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1206 19:56:49.991022       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:55:59 UTC, ends at Wed 2023-12-06 20:10:20 UTC. --
	Dec 06 20:07:42 no-preload-989559 kubelet[1347]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:07:42 no-preload-989559 kubelet[1347]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:07:47 no-preload-989559 kubelet[1347]: E1206 20:07:47.017296    1347 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 06 20:07:47 no-preload-989559 kubelet[1347]: E1206 20:07:47.017426    1347 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 06 20:07:47 no-preload-989559 kubelet[1347]: E1206 20:07:47.017774    1347 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-knhrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-vz7qc_kube-system(97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 06 20:07:47 no-preload-989559 kubelet[1347]: E1206 20:07:47.017824    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:07:58 no-preload-989559 kubelet[1347]: E1206 20:07:58.945016    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:08:12 no-preload-989559 kubelet[1347]: E1206 20:08:12.945092    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:08:25 no-preload-989559 kubelet[1347]: E1206 20:08:25.944893    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:08:39 no-preload-989559 kubelet[1347]: E1206 20:08:39.944815    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:08:42 no-preload-989559 kubelet[1347]: E1206 20:08:42.983160    1347 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:08:42 no-preload-989559 kubelet[1347]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:08:42 no-preload-989559 kubelet[1347]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:08:42 no-preload-989559 kubelet[1347]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:08:53 no-preload-989559 kubelet[1347]: E1206 20:08:53.944908    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:09:05 no-preload-989559 kubelet[1347]: E1206 20:09:05.945189    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:09:16 no-preload-989559 kubelet[1347]: E1206 20:09:16.944525    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:09:30 no-preload-989559 kubelet[1347]: E1206 20:09:30.944982    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:09:42 no-preload-989559 kubelet[1347]: E1206 20:09:42.979899    1347 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:09:42 no-preload-989559 kubelet[1347]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:09:42 no-preload-989559 kubelet[1347]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:09:42 no-preload-989559 kubelet[1347]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:09:45 no-preload-989559 kubelet[1347]: E1206 20:09:45.944790    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:09:59 no-preload-989559 kubelet[1347]: E1206 20:09:59.944106    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:10:13 no-preload-989559 kubelet[1347]: E1206 20:10:13.945255    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	
	* 
	* ==> storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] <==
	* I1206 19:56:52.347280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 19:57:22.350162       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] <==
	* I1206 19:57:23.295534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 19:57:23.307526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 19:57:23.307782       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 19:57:23.319318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 19:57:23.319538       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-989559_33c51682-fe10-45ce-b932-59ec894aaf43!
	I1206 19:57:23.319389       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe0beb93-637f-469a-88e2-6358f219c300", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-989559_33c51682-fe10-45ce-b932-59ec894aaf43 became leader
	I1206 19:57:23.420399       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-989559_33c51682-fe10-45ce-b932-59ec894aaf43!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-989559 -n no-preload-989559
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-989559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vz7qc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-989559 describe pod metrics-server-57f55c9bc5-vz7qc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-989559 describe pod metrics-server-57f55c9bc5-vz7qc: exit status 1 (68.329383ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vz7qc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-989559 describe pod metrics-server-57f55c9bc5-vz7qc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1206 20:02:54.631492   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 20:03:02.203749   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 20:03:08.166619   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 20:03:22.657292   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 20:03:58.794393   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 20:04:17.681903   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 20:04:25.250176   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 20:04:31.211950   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 20:04:34.367394   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 20:04:49.042501   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 20:05:21.840569   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 20:05:42.080596   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 20:05:51.525216   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 20:05:57.413961   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 20:06:12.087784   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 20:06:27.860032   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 20:07:05.124259   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 20:07:14.574223   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 20:07:50.905830   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 20:07:54.631300   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 20:08:02.203721   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 20:08:08.167584   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 20:08:22.657814   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 20:08:58.794291   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 20:09:34.367807   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 20:09:49.041976   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-448851 -n old-k8s-version-448851
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:10:59.314070848 +0000 UTC m=+5433.398564665
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
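Editor's note: a minimal sketch of re-checking the pod this test timed out on, for manual follow-up outside the harness. It assumes the profile's kubeconfig context (old-k8s-version-448851, as used by the commands above) and the kubernetes-dashboard namespace conventionally used by the minikube dashboard addon; neither is confirmed by this log.
    # list the dashboard pods the test selector targets
    kubectl --context old-k8s-version-448851 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # repeat the readiness wait that hit the 9m deadline
    kubectl --context old-k8s-version-448851 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m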
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-448851 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-448851 logs -n 25: (1.708154207s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-459609 sudo cat                              | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo find                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo crio                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:50:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:50:49.512923  115591 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:50:49.513070  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513079  115591 out.go:309] Setting ErrFile to fd 2...
	I1206 19:50:49.513084  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513305  115591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:50:49.513900  115591 out.go:303] Setting JSON to false
	I1206 19:50:49.514822  115591 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9200,"bootTime":1701883050,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:50:49.514886  115591 start.go:138] virtualization: kvm guest
	I1206 19:50:49.517831  115591 out.go:177] * [embed-certs-209025] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:50:49.519496  115591 notify.go:220] Checking for updates...
	I1206 19:50:49.519507  115591 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:50:49.521356  115591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:50:49.523241  115591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:50:49.525016  115591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:50:49.526632  115591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:50:49.528148  115591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:50:49.530159  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:50:49.530586  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.530636  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.545128  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I1206 19:50:49.545881  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.547345  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.547375  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.547739  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.547926  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.548144  115591 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:50:49.548458  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.548506  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.562767  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I1206 19:50:49.563139  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.563567  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.563588  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.563913  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.564112  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.600267  115591 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:50:49.601977  115591 start.go:298] selected driver: kvm2
	I1206 19:50:49.601996  115591 start.go:902] validating driver "kvm2" against &{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.602089  115591 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:50:49.602812  115591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.602891  115591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:50:49.617831  115591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:50:49.618234  115591 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:50:49.618296  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:50:49.618306  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:50:49.618316  115591 start_flags.go:323] config:
	{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.618468  115591 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.620428  115591 out.go:177] * Starting control plane node embed-certs-209025 in cluster embed-certs-209025
	I1206 19:50:46.558601  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:46.558636  115497 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:46.558644  115497 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:46.558714  115497 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:46.558724  115497 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:46.558837  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:50:46.559024  115497 start.go:365] acquiring machines lock for default-k8s-diff-port-380424: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:49.622242  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:49.622298  115591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:49.622320  115591 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:49.622419  115591 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:49.622431  115591 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:49.622525  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:50:49.622798  115591 start.go:365] acquiring machines lock for embed-certs-209025: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:51.693503  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:50:54.765519  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:00.845535  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:03.917509  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:09.997591  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:13.069427  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:19.149482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:22.221565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:28.301531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:31.373569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:37.453523  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:40.525531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:46.605538  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:49.677544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:55.757544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:58.829552  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:04.909569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:07.981555  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:14.061549  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:17.133576  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:23.213558  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:26.285482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:32.365550  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:35.437574  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:41.517473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:44.589458  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:50.669534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:53.741496  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:59.821528  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:02.893489  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:08.973534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:12.045527  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:18.125473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:21.197472  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:27.277533  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:30.349580  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:36.429514  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:39.501584  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:45.581524  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:48.653547  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:54.733543  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:57.805491  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:03.885571  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:06.957565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:13.037470  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:16.109461  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:22.189477  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:25.261563  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:31.341534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:34.413513  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:40.493530  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:43.497878  115217 start.go:369] acquired machines lock for "old-k8s-version-448851" in 4m25.369261381s
	I1206 19:54:43.497937  115217 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:54:43.497949  115217 fix.go:54] fixHost starting: 
	I1206 19:54:43.498301  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:54:43.498331  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:54:43.513072  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I1206 19:54:43.513520  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:54:43.514005  115217 main.go:141] libmachine: Using API Version  1
	I1206 19:54:43.514035  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:54:43.514375  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:54:43.514571  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:54:43.514716  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 19:54:43.516245  115217 fix.go:102] recreateIfNeeded on old-k8s-version-448851: state=Stopped err=<nil>
	I1206 19:54:43.516266  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	W1206 19:54:43.516391  115217 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:54:43.518413  115217 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-448851" ...
	I1206 19:54:43.495395  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:54:43.495445  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:54:43.497720  115078 machine.go:91] provisioned docker machine in 4m37.37101565s
	I1206 19:54:43.497766  115078 fix.go:56] fixHost completed within 4m37.395231745s
	I1206 19:54:43.497773  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 4m37.395253694s
	W1206 19:54:43.497813  115078 start.go:694] error starting host: provision: host is not running
	W1206 19:54:43.497949  115078 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1206 19:54:43.497960  115078 start.go:709] Will try again in 5 seconds ...
	I1206 19:54:43.519752  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Start
	I1206 19:54:43.519905  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring networks are active...
	I1206 19:54:43.520785  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network default is active
	I1206 19:54:43.521155  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network mk-old-k8s-version-448851 is active
	I1206 19:54:43.521477  115217 main.go:141] libmachine: (old-k8s-version-448851) Getting domain xml...
	I1206 19:54:43.522123  115217 main.go:141] libmachine: (old-k8s-version-448851) Creating domain...
	I1206 19:54:44.758967  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting to get IP...
	I1206 19:54:44.759812  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:44.760194  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:44.760255  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:44.760156  116186 retry.go:31] will retry after 298.997725ms: waiting for machine to come up
	I1206 19:54:45.061071  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.061521  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.061545  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.061474  116186 retry.go:31] will retry after 338.263286ms: waiting for machine to come up
	I1206 19:54:45.401161  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.401614  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.401641  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.401572  116186 retry.go:31] will retry after 468.987471ms: waiting for machine to come up
	I1206 19:54:45.872203  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.872644  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.872675  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.872586  116186 retry.go:31] will retry after 447.252306ms: waiting for machine to come up
	I1206 19:54:46.321277  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.321583  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.321609  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.321549  116186 retry.go:31] will retry after 591.206607ms: waiting for machine to come up
	I1206 19:54:46.913936  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.914351  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.914412  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.914260  116186 retry.go:31] will retry after 888.979547ms: waiting for machine to come up
	I1206 19:54:47.805332  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:47.805783  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:47.805814  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:47.805722  116186 retry.go:31] will retry after 1.088490978s: waiting for machine to come up
	I1206 19:54:48.499603  115078 start.go:365] acquiring machines lock for no-preload-989559: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:54:48.895892  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:48.896316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:48.896347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:48.896249  116186 retry.go:31] will retry after 1.145932913s: waiting for machine to come up
	I1206 19:54:50.043740  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:50.044169  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:50.044199  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:50.044136  116186 retry.go:31] will retry after 1.302468984s: waiting for machine to come up
	I1206 19:54:51.347696  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:51.348093  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:51.348124  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:51.348039  116186 retry.go:31] will retry after 2.099836852s: waiting for machine to come up
	I1206 19:54:53.450166  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:53.450638  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:53.450678  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:53.450566  116186 retry.go:31] will retry after 1.877757048s: waiting for machine to come up
	I1206 19:54:55.331257  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:55.331697  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:55.331752  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:55.331671  116186 retry.go:31] will retry after 3.399849348s: waiting for machine to come up
	I1206 19:54:58.733325  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:58.733712  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:58.733736  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:58.733664  116186 retry.go:31] will retry after 4.308323214s: waiting for machine to come up
	I1206 19:55:04.350333  115497 start.go:369] acquired machines lock for "default-k8s-diff-port-380424" in 4m17.791271724s
	I1206 19:55:04.350411  115497 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:04.350426  115497 fix.go:54] fixHost starting: 
	I1206 19:55:04.350878  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:04.350927  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:04.367462  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I1206 19:55:04.367935  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:04.368546  115497 main.go:141] libmachine: Using API Version  1
	I1206 19:55:04.368580  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:04.368972  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:04.369197  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:04.369417  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 19:55:04.370940  115497 fix.go:102] recreateIfNeeded on default-k8s-diff-port-380424: state=Stopped err=<nil>
	I1206 19:55:04.370982  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	W1206 19:55:04.371135  115497 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:04.373809  115497 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-380424" ...
	I1206 19:55:03.047055  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047484  115217 main.go:141] libmachine: (old-k8s-version-448851) Found IP for machine: 192.168.61.33
	I1206 19:55:03.047516  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has current primary IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047527  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserving static IP address...
	I1206 19:55:03.048083  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.048116  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | skip adding static IP to network mk-old-k8s-version-448851 - found existing host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"}
	I1206 19:55:03.048135  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserved static IP address: 192.168.61.33
	I1206 19:55:03.048146  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting for SSH to be available...
	I1206 19:55:03.048158  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Getting to WaitForSSH function...
	I1206 19:55:03.050347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.050682  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050793  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH client type: external
	I1206 19:55:03.050872  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa (-rw-------)
	I1206 19:55:03.050913  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:03.050935  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | About to run SSH command:
	I1206 19:55:03.050956  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | exit 0
	I1206 19:55:03.137326  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:03.137753  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetConfigRaw
	I1206 19:55:03.138415  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.140903  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141314  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.141341  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141671  115217 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/config.json ...
	I1206 19:55:03.141899  115217 machine.go:88] provisioning docker machine ...
	I1206 19:55:03.141924  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:03.142133  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142284  115217 buildroot.go:166] provisioning hostname "old-k8s-version-448851"
	I1206 19:55:03.142305  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142511  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.144778  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145119  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.145144  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145289  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.145451  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145582  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145705  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.145829  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.146319  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.146343  115217 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448851 && echo "old-k8s-version-448851" | sudo tee /etc/hostname
	I1206 19:55:03.270447  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448851
	
	I1206 19:55:03.270490  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.273453  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273769  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.273802  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273957  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.274148  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274326  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274426  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.274576  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.274893  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.274910  115217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448851/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:03.395200  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:03.395232  115217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:03.395281  115217 buildroot.go:174] setting up certificates
	I1206 19:55:03.395298  115217 provision.go:83] configureAuth start
	I1206 19:55:03.395320  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.395585  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.397989  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398373  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.398405  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398547  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.400869  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401196  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.401223  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401369  115217 provision.go:138] copyHostCerts
	I1206 19:55:03.401492  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:03.401513  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:03.401600  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:03.401718  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:03.401730  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:03.401778  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:03.401857  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:03.401867  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:03.401899  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:03.401971  115217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448851 san=[192.168.61.33 192.168.61.33 localhost 127.0.0.1 minikube old-k8s-version-448851]
	I1206 19:55:03.655010  115217 provision.go:172] copyRemoteCerts
	I1206 19:55:03.655082  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:03.655110  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.657860  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658301  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.658336  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658529  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.658738  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.658914  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.659068  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:03.742021  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:03.765284  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 19:55:03.788562  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:03.811692  115217 provision.go:86] duration metric: configureAuth took 416.376347ms
	I1206 19:55:03.811722  115217 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:03.811943  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 19:55:03.812058  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.814518  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.814898  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.814934  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.815149  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.815371  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815541  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.815787  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.816094  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.816121  115217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:04.115752  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:04.115780  115217 machine.go:91] provisioned docker machine in 973.864642ms
	I1206 19:55:04.115790  115217 start.go:300] post-start starting for "old-k8s-version-448851" (driver="kvm2")
	I1206 19:55:04.115802  115217 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:04.115825  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.116197  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:04.116226  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.119234  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119559  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.119586  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119801  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.120047  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.120228  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.120391  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.203195  115217 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:04.207210  115217 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:04.207238  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:04.207315  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:04.207392  115217 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:04.207475  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:04.215469  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:04.238407  115217 start.go:303] post-start completed in 122.598676ms
	I1206 19:55:04.238437  115217 fix.go:56] fixHost completed within 20.740486511s
	I1206 19:55:04.238467  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.241147  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241522  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.241558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241720  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.241992  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242187  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242346  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.242488  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:04.242801  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:04.242813  115217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:04.350154  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892504.298339573
	
	I1206 19:55:04.350177  115217 fix.go:206] guest clock: 1701892504.298339573
	I1206 19:55:04.350185  115217 fix.go:219] Guest: 2023-12-06 19:55:04.298339573 +0000 UTC Remote: 2023-12-06 19:55:04.238442081 +0000 UTC m=+286.264851054 (delta=59.897492ms)
	I1206 19:55:04.350206  115217 fix.go:190] guest clock delta is within tolerance: 59.897492ms
	I1206 19:55:04.350212  115217 start.go:83] releasing machines lock for "old-k8s-version-448851", held for 20.852295937s
	I1206 19:55:04.350240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.350562  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:04.353070  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353519  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.353547  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353732  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354331  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354552  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354641  115217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:04.354689  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.354815  115217 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:04.354844  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.357316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357703  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.357734  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357841  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358006  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.358031  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358052  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.358161  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358241  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358322  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358448  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.358575  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358734  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.469402  115217 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:04.475231  115217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:04.618312  115217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:04.625482  115217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:04.625557  115217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:04.640251  115217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:04.640281  115217 start.go:475] detecting cgroup driver to use...
	I1206 19:55:04.640368  115217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:04.654153  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:04.666295  115217 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:04.666387  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:04.678579  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:04.692472  115217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:04.793090  115217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:04.909331  115217 docker.go:219] disabling docker service ...
	I1206 19:55:04.909399  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:04.922479  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:04.934122  115217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:05.048844  115217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:05.156415  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:05.172525  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:05.190303  115217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1206 19:55:05.190363  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.199967  115217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:05.200048  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.209725  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.218770  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.227835  115217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:05.237006  115217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:05.244839  115217 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:05.244899  115217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:05.256528  115217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:05.266360  115217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:05.387203  115217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:05.555553  115217 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:05.555668  115217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:05.564619  115217 start.go:543] Will wait 60s for crictl version
	I1206 19:55:05.564682  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:05.568979  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:05.611883  115217 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:05.611986  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.666757  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.725942  115217 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1206 19:55:04.375626  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Start
	I1206 19:55:04.375819  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring networks are active...
	I1206 19:55:04.376548  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network default is active
	I1206 19:55:04.376923  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network mk-default-k8s-diff-port-380424 is active
	I1206 19:55:04.377416  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Getting domain xml...
	I1206 19:55:04.378003  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Creating domain...
	I1206 19:55:05.667493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting to get IP...
	I1206 19:55:05.668629  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669112  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669148  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.669064  116315 retry.go:31] will retry after 259.414087ms: waiting for machine to come up
	I1206 19:55:05.930773  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931232  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.931129  116315 retry.go:31] will retry after 319.702286ms: waiting for machine to come up
	I1206 19:55:06.252911  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253423  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253458  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.253363  116315 retry.go:31] will retry after 403.286071ms: waiting for machine to come up
	I1206 19:55:05.727444  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:05.730503  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.730864  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:05.730900  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.731151  115217 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:05.735826  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:05.748254  115217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 19:55:05.748312  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:05.799380  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:05.799468  115217 ssh_runner.go:195] Run: which lz4
	I1206 19:55:05.803715  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:05.808059  115217 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:05.808093  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1206 19:55:07.624367  115217 crio.go:444] Took 1.820689 seconds to copy over tarball
	I1206 19:55:07.624452  115217 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:06.658075  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.658710  116315 retry.go:31] will retry after 572.663186ms: waiting for machine to come up
	I1206 19:55:07.233562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233898  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233927  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.233861  116315 retry.go:31] will retry after 762.563485ms: waiting for machine to come up
	I1206 19:55:07.997980  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998453  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.998368  116315 retry.go:31] will retry after 885.694635ms: waiting for machine to come up
	I1206 19:55:08.885521  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885983  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:08.885918  116315 retry.go:31] will retry after 924.594214ms: waiting for machine to come up
	I1206 19:55:09.812796  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813271  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813305  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:09.813205  116315 retry.go:31] will retry after 1.485258028s: waiting for machine to come up
	I1206 19:55:11.300830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301385  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:11.301315  116315 retry.go:31] will retry after 1.232055429s: waiting for machine to come up
	I1206 19:55:10.452537  115217 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.828052972s)
	I1206 19:55:10.452565  115217 crio.go:451] Took 2.828166 seconds to extract the tarball
	I1206 19:55:10.452574  115217 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:10.493620  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:10.539181  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:10.539218  115217 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:55:10.539312  115217 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.539318  115217 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.539358  115217 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.539364  115217 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.539515  115217 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.539529  115217 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.539331  115217 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.539572  115217 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.540888  115217 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.540931  115217 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.540936  115217 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540880  115217 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.725027  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1206 19:55:10.762761  115217 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1206 19:55:10.762810  115217 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1206 19:55:10.762862  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.763731  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.766312  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.768181  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1206 19:55:10.773115  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.829543  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.841186  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.856309  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.873212  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.983390  115217 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1206 19:55:10.983444  115217 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.983463  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1206 19:55:10.983498  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983510  115217 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1206 19:55:10.983530  115217 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.983564  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1206 19:55:10.983628  115217 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1206 19:55:10.983663  115217 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.983672  115217 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1206 19:55:10.983700  115217 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.983712  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983567  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983730  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983802  115217 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1206 19:55:10.983829  115217 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.983861  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009102  115217 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1206 19:55:11.009135  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:11.009152  115217 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.009211  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009254  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1206 19:55:11.009273  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:11.009307  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:11.009342  115217 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1206 19:55:11.009355  115217 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009388  115217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009390  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:11.130238  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1206 19:55:11.158336  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1206 19:55:11.158375  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1206 19:55:11.158431  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.158438  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1206 19:55:11.158507  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1206 19:55:12.535831  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536331  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:12.536253  116315 retry.go:31] will retry after 1.865303927s: waiting for machine to come up
	I1206 19:55:14.402935  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403326  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403354  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:14.403268  116315 retry.go:31] will retry after 1.960994282s: waiting for machine to come up
	I1206 19:55:16.366289  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366792  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:16.366689  116315 retry.go:31] will retry after 2.933451161s: waiting for machine to come up
	I1206 19:55:13.478881  115217 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0: (2.320421557s)
	I1206 19:55:13.478937  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1206 19:55:13.478892  115217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.469478111s)
	I1206 19:55:13.478983  115217 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1206 19:55:13.479043  115217 cache_images.go:92] LoadImages completed in 2.939808867s
	W1206 19:55:13.479149  115217 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1206 19:55:13.479228  115217 ssh_runner.go:195] Run: crio config
	I1206 19:55:13.543270  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:13.543302  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:13.543328  115217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:13.543355  115217 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448851 NodeName:old-k8s-version-448851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 19:55:13.543557  115217 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-448851"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-448851
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.33:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:13.543700  115217 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-448851 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:13.543776  115217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1206 19:55:13.554524  115217 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:13.554611  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:13.566752  115217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1206 19:55:13.586027  115217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:13.603800  115217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1206 19:55:13.627098  115217 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:13.632470  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:13.651452  115217 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851 for IP: 192.168.61.33
	I1206 19:55:13.651507  115217 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:13.651670  115217 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:13.651748  115217 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:13.651860  115217 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.key
	I1206 19:55:13.651932  115217 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key.efa8c2ad
	I1206 19:55:13.651994  115217 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key
	I1206 19:55:13.652142  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:13.652183  115217 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:13.652201  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:13.652241  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:13.652283  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:13.652326  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:13.652389  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:13.653344  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:13.687786  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:13.723604  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:13.756434  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:13.789066  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:13.821087  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:13.849840  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:13.876520  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:13.901763  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:13.932106  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:13.961708  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:13.991586  115217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:14.009848  115217 ssh_runner.go:195] Run: openssl version
	I1206 19:55:14.017661  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:14.031103  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037142  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037212  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.044737  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:14.058296  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:14.068591  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.073995  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.074067  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.079922  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:14.090541  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:14.100915  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106692  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106766  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.112592  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
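	The ln/openssl sequence above installs each CA under /usr/share/ca-certificates and then exposes it in /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject-hash name, which is how the guest's TLS libraries look up trust anchors. A minimal hand-run sketch of the same scheme (the file name myCA.pem is illustrative, not taken from this log):
	    # 1) make the CA visible under /etc/ssl/certs by name
	    sudo ln -fs /usr/share/ca-certificates/myCA.pem /etc/ssl/certs/myCA.pem
	    # 2) compute its OpenSSL subject hash and link it under <hash>.0 as well
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myCA.pem)
	    sudo ln -fs /etc/ssl/certs/myCA.pem "/etc/ssl/certs/${HASH}.0"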
	I1206 19:55:14.122630  115217 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:14.128544  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:14.136649  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:14.143060  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:14.151002  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:14.157202  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:14.163456  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
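	Each openssl invocation above passes -checkend 86400, i.e. it asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status on any of them would force the certificates to be regenerated before the cluster restart. An equivalent one-off check (path copied from the log):
	    # exits 0 if the cert is still valid in 24 hours, 1 if it will have expired by then
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
	      && echo "still valid for >= 24h" || echo "expiring within 24h"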
	I1206 19:55:14.171607  115217 kubeadm.go:404] StartCluster: {Name:old-k8s-version-448851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:14.171720  115217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:14.171771  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:14.216630  115217 cri.go:89] found id: ""
	I1206 19:55:14.216712  115217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:14.229800  115217 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:14.229832  115217 kubeadm.go:636] restartCluster start
	I1206 19:55:14.229889  115217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:14.242347  115217 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.243973  115217 kubeconfig.go:92] found "old-k8s-version-448851" server: "https://192.168.61.33:8443"
	I1206 19:55:14.247781  115217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:14.257060  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.257118  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.268619  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.268644  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.268692  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.279803  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.780509  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.780603  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.796116  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.280797  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.280910  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.296260  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.779895  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.780023  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.796115  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.280792  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.280884  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.297258  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.780884  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.781007  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.796430  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.279982  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.280088  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.291102  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.780721  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.780865  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.792253  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.302288  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302717  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302744  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:19.302670  116315 retry.go:31] will retry after 3.226665023s: waiting for machine to come up
	I1206 19:55:18.280684  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.280777  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.292535  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:18.780650  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.780722  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.793872  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.280431  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.280507  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.292188  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.780793  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.780914  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.791873  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.280527  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.280637  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.291886  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.780810  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.780890  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.791837  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.280389  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.280479  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.291743  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.780252  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.780343  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.791452  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.280013  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.280120  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.291240  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.780451  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.780528  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.791668  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.690245  115591 start.go:369] acquired machines lock for "embed-certs-209025" in 4m34.06740814s
	I1206 19:55:23.690318  115591 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:23.690327  115591 fix.go:54] fixHost starting: 
	I1206 19:55:23.690686  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:23.690728  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:23.706483  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I1206 19:55:23.706891  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:23.707367  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:55:23.707391  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:23.707744  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:23.707925  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:23.708059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 19:55:23.709586  115591 fix.go:102] recreateIfNeeded on embed-certs-209025: state=Stopped err=<nil>
	I1206 19:55:23.709612  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	W1206 19:55:23.709803  115591 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:23.712015  115591 out.go:177] * Restarting existing kvm2 VM for "embed-certs-209025" ...
	I1206 19:55:23.713472  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Start
	I1206 19:55:23.713637  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring networks are active...
	I1206 19:55:23.714335  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network default is active
	I1206 19:55:23.714639  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network mk-embed-certs-209025 is active
	I1206 19:55:23.714978  115591 main.go:141] libmachine: (embed-certs-209025) Getting domain xml...
	I1206 19:55:23.715617  115591 main.go:141] libmachine: (embed-certs-209025) Creating domain...
	I1206 19:55:22.530618  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has current primary IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531107  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Found IP for machine: 192.168.72.22
	I1206 19:55:22.531117  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserving static IP address...
	I1206 19:55:22.531437  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.531465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | skip adding static IP to network mk-default-k8s-diff-port-380424 - found existing host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"}
	I1206 19:55:22.531485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Getting to WaitForSSH function...
	I1206 19:55:22.531496  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserved static IP address: 192.168.72.22
	I1206 19:55:22.531554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for SSH to be available...
	I1206 19:55:22.533485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533729  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.533752  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533853  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH client type: external
	I1206 19:55:22.533880  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa (-rw-------)
	I1206 19:55:22.533916  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:22.533941  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | About to run SSH command:
	I1206 19:55:22.533957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | exit 0
	I1206 19:55:22.620864  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:22.621194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetConfigRaw
	I1206 19:55:22.621844  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.624194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624565  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.624599  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624876  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:55:22.625062  115497 machine.go:88] provisioning docker machine ...
	I1206 19:55:22.625081  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:22.625310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625481  115497 buildroot.go:166] provisioning hostname "default-k8s-diff-port-380424"
	I1206 19:55:22.625502  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625635  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.627886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628227  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.628255  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.628499  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628658  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628784  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.628940  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.629440  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.629462  115497 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-380424 && echo "default-k8s-diff-port-380424" | sudo tee /etc/hostname
	I1206 19:55:22.753829  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-380424
	
	I1206 19:55:22.753867  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.756620  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.756958  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.757001  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.757129  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.757375  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757558  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757700  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.757868  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.758197  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.758252  115497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-380424' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-380424/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-380424' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:22.878138  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:22.878175  115497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:22.878202  115497 buildroot.go:174] setting up certificates
	I1206 19:55:22.878246  115497 provision.go:83] configureAuth start
	I1206 19:55:22.878259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.878557  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.881145  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881515  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.881547  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881657  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.883591  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.883943  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.883981  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.884062  115497 provision.go:138] copyHostCerts
	I1206 19:55:22.884122  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:22.884135  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:22.884203  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:22.884334  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:22.884346  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:22.884375  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:22.884446  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:22.884457  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:22.884484  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:22.884539  115497 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-380424 san=[192.168.72.22 192.168.72.22 localhost 127.0.0.1 minikube default-k8s-diff-port-380424]
	I1206 19:55:22.973559  115497 provision.go:172] copyRemoteCerts
	I1206 19:55:22.973627  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:22.973660  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.976374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976656  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.976695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976888  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.977068  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.977300  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.977468  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.061925  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:23.085093  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1206 19:55:23.108283  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:55:23.131666  115497 provision.go:86] duration metric: configureAuth took 253.404471ms
	I1206 19:55:23.131701  115497 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:23.131879  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:23.131957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.134672  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135033  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.135077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135214  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.135436  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135622  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135781  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.135941  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.136393  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.136427  115497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:23.445361  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:23.445389  115497 machine.go:91] provisioned docker machine in 820.312346ms
	I1206 19:55:23.445404  115497 start.go:300] post-start starting for "default-k8s-diff-port-380424" (driver="kvm2")
	I1206 19:55:23.445418  115497 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:23.445457  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.445851  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:23.445886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.448493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.448851  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.448879  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.449021  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.449210  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.449408  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.449562  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.535493  115497 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:23.539696  115497 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:23.539718  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:23.539780  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:23.539862  115497 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:23.539968  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:23.548629  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:23.572264  115497 start.go:303] post-start completed in 126.842848ms
	I1206 19:55:23.572287  115497 fix.go:56] fixHost completed within 19.221864403s
	I1206 19:55:23.572321  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.575329  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.575739  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575890  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.576093  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576272  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576429  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.576599  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.577046  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.577064  115497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:23.690035  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892523.637580982
	
	I1206 19:55:23.690064  115497 fix.go:206] guest clock: 1701892523.637580982
	I1206 19:55:23.690084  115497 fix.go:219] Guest: 2023-12-06 19:55:23.637580982 +0000 UTC Remote: 2023-12-06 19:55:23.572291664 +0000 UTC m=+277.181979500 (delta=65.289318ms)
	I1206 19:55:23.690146  115497 fix.go:190] guest clock delta is within tolerance: 65.289318ms
	I1206 19:55:23.690159  115497 start.go:83] releasing machines lock for "default-k8s-diff-port-380424", held for 19.339778523s
	I1206 19:55:23.690192  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.690465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:23.692996  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693337  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.693369  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694250  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694336  115497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:23.694390  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.694463  115497 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:23.694486  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.696938  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697063  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697393  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697473  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697514  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697593  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697674  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.697675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697876  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.697899  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.698044  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.698038  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.698167  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.786973  115497 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:23.814262  115497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:23.954235  115497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:23.961434  115497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:23.961523  115497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:23.981459  115497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
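	The find command logged just above appears with Go's %! placeholders, but its effect is simply to rename any pre-existing bridge/podman CNI configs so CRI-O no longer loads them; a hand-run equivalent (assumed to run as root on the guest) would be:
	    # shelve conflicting bridge/podman CNI configs by giving them a .mk_disabled suffix
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;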
	I1206 19:55:23.981488  115497 start.go:475] detecting cgroup driver to use...
	I1206 19:55:23.981550  115497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:24.000294  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:24.013738  115497 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:24.013799  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:24.030844  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:24.044583  115497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:24.161979  115497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:24.296507  115497 docker.go:219] disabling docker service ...
	I1206 19:55:24.296580  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:24.311171  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:24.323538  115497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:24.440425  115497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:24.570168  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:24.583169  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:24.600733  115497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:24.600790  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.610057  115497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:24.610129  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.621925  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.631383  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.640414  115497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:24.649853  115497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:24.657999  115497 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:24.658052  115497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:24.672821  115497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:24.681200  115497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:24.812790  115497 ssh_runner.go:195] Run: sudo systemctl restart crio
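	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf for the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager, points crictl at the CRI-O socket, loads br_netfilter, enables IPv4 forwarding, and restarts the daemon. Condensed into a hand-runnable sketch of the same edits (values and paths taken from the log; the docker/cri-docker disabling steps are omitted):
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                          # drop any stale conmon_cgroup line
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"   # re-add it right after cgroup_manager
	    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	    sudo modprobe br_netfilter
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	    sudo systemctl daemon-reload && sudo systemctl restart crio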
	I1206 19:55:24.989383  115497 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:24.989483  115497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:24.995335  115497 start.go:543] Will wait 60s for crictl version
	I1206 19:55:24.995404  115497 ssh_runner.go:195] Run: which crictl
	I1206 19:55:24.999307  115497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:25.038932  115497 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:25.039046  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.085844  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.148264  115497 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:25.149676  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:25.152759  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153157  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:25.153201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153451  115497 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:25.157621  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:25.173609  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:25.173680  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:25.223564  115497 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:25.223647  115497 ssh_runner.go:195] Run: which lz4
	I1206 19:55:25.228720  115497 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:25.234028  115497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:25.234061  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:23.280317  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.280398  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.291959  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.780005  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.780086  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.794371  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:24.257148  115217 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:24.257182  115217 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:24.257196  115217 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:24.257291  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:24.300759  115217 cri.go:89] found id: ""
	I1206 19:55:24.300832  115217 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:24.319509  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:24.329215  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:24.329310  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338150  115217 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338187  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:24.490031  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.123737  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.359750  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.550542  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
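	With no kube-system containers running and no kubeconfigs left under /etc/kubernetes, the restart path rebuilds the control plane piecewise by running individual kubeadm init phases against the already-rendered /var/tmp/minikube/kubeadm.yaml rather than a full kubeadm init. The phase sequence from the log, written out as it runs on the node:
	    BIN=/var/lib/minikube/binaries/v1.16.0   # version-pinned kubeadm binary, as logged
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CFG"          # regenerate any missing certificates
	    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CFG"     # admin/kubelet/controller-manager/scheduler kubeconfigs
	    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CFG"      # write kubelet config and start kubelet
	    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"  # static pod manifests for the control plane
	    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CFG"         # static pod manifest for local etcd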
	I1206 19:55:25.697003  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:25.697091  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:25.713836  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.231509  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.730965  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.231602  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.731612  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.763155  115217 api_server.go:72] duration metric: took 2.066152846s to wait for apiserver process to appear ...
	I1206 19:55:27.763181  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:27.763200  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
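[editor's note] The healthz wait above simply polls the apiserver's /healthz endpoint until it returns HTTP 200 or a deadline expires. A minimal Go sketch of that pattern, assuming a self-signed apiserver certificate (this is illustrative, not minikube's api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    // TLS verification is skipped because the apiserver certificate is self-signed.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.33:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }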
	I1206 19:55:25.055509  115591 main.go:141] libmachine: (embed-certs-209025) Waiting to get IP...
	I1206 19:55:25.056687  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.057138  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.057192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.057100  116938 retry.go:31] will retry after 304.168381ms: waiting for machine to come up
	I1206 19:55:25.363765  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.364265  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.364404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.364341  116938 retry.go:31] will retry after 351.729741ms: waiting for machine to come up
	I1206 19:55:25.718184  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.718746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.718774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.718650  116938 retry.go:31] will retry after 340.321802ms: waiting for machine to come up
	I1206 19:55:26.060168  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.060796  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.060843  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.060725  116938 retry.go:31] will retry after 422.434651ms: waiting for machine to come up
	I1206 19:55:26.484420  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.484967  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.485003  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.484931  116938 retry.go:31] will retry after 584.854153ms: waiting for machine to come up
	I1206 19:55:27.071766  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.072298  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.072325  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.072233  116938 retry.go:31] will retry after 710.482528ms: waiting for machine to come up
	I1206 19:55:27.784162  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.784660  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.784695  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.784560  116938 retry.go:31] will retry after 754.279817ms: waiting for machine to come up
	I1206 19:55:28.540261  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:28.540788  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:28.540818  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:28.540728  116938 retry.go:31] will retry after 1.359726156s: waiting for machine to come up
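[editor's note] The repeated "will retry after ..." lines above come from a retry helper that sleeps a growing, jittered interval between attempts while waiting for the VM to report an IP. A rough Go sketch of that idea (illustrative only, not minikube's retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts are exhausted,
    // sleeping a growing, jittered duration between attempts.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// Add up to 50% jitter so concurrent waiters do not retry in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
    		fmt.Printf("will retry after %v\n", sleep)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return errors.New("gave up waiting")
    }

    func main() {
    	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
    		return errors.New("machine has no IP yet") // placeholder condition
    	})
    }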
	I1206 19:55:27.194696  115497 crio.go:444] Took 1.966010 seconds to copy over tarball
	I1206 19:55:27.194774  115497 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:30.501183  115497 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.306375512s)
	I1206 19:55:30.501222  115497 crio.go:451] Took 3.306493 seconds to extract the tarball
	I1206 19:55:30.501249  115497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:30.542574  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:30.587381  115497 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:30.587405  115497 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:30.587483  115497 ssh_runner.go:195] Run: crio config
	I1206 19:55:30.649117  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:30.649140  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:30.649163  115497 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:30.649191  115497 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.22 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-380424 NodeName:default-k8s-diff-port-380424 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:30.649383  115497 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.22
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-380424"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:30.649487  115497 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-380424 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
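[editor's note] The kubeadm YAML and kubelet unit above are rendered from the option struct logged at kubeadm.go:176. A hedged Go sketch of how such a file could be generated with text/template; the template fields and values here are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // params holds the handful of values that vary per cluster in this sketch.
    type params struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	PodSubnet         string
    	KubernetesVersion string
    }

    const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
    	p := params{
    		AdvertiseAddress:  "192.168.72.22",
    		BindPort:          8444,
    		NodeName:          "default-k8s-diff-port-380424",
    		PodSubnet:         "10.244.0.0/16",
    		KubernetesVersion: "v1.28.4",
    	}
    	// Print the rendered config; the real file is copied to
    	// /var/tmp/minikube/kubeadm.yaml.new on the node.
    	_ = tmpl.Execute(os.Stdout, p)
    }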
	I1206 19:55:30.649561  115497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:30.659186  115497 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:30.659297  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:30.668534  115497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1206 19:55:30.684815  115497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:30.701801  115497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1206 19:55:30.721756  115497 ssh_runner.go:195] Run: grep 192.168.72.22	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:30.726656  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:30.738943  115497 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424 for IP: 192.168.72.22
	I1206 19:55:30.738981  115497 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:30.739159  115497 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:30.739219  115497 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:30.739322  115497 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.key
	I1206 19:55:30.739426  115497 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key.99d663cb
	I1206 19:55:30.739489  115497 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key
	I1206 19:55:30.739629  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:30.739672  115497 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:30.739689  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:30.739726  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:30.739762  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:30.739801  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:30.739872  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:30.740532  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:30.766689  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:30.792892  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:30.817640  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:30.842916  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:30.868057  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:30.893993  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:30.924631  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:30.953503  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:30.980162  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:31.007247  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:31.034274  115497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:31.054544  115497 ssh_runner.go:195] Run: openssl version
	I1206 19:55:31.062053  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:31.077159  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083640  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083707  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.091093  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:31.105305  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:31.117767  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123703  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123798  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.131531  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:31.142449  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:31.157311  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163707  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163783  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.170831  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:31.183300  115497 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:31.188165  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:31.194562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:31.201769  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:31.209562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:31.217346  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:31.225522  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
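[editor's note] Each "openssl x509 ... -checkend 86400" run above asks whether the named certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509 (a sketch; the path is taken from the log lines above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires before now+d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }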
	I1206 19:55:31.233755  115497 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-380424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:31.233889  115497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:31.233952  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:31.278891  115497 cri.go:89] found id: ""
	I1206 19:55:31.278972  115497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:31.291971  115497 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:31.291999  115497 kubeadm.go:636] restartCluster start
	I1206 19:55:31.292070  115497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:31.304934  115497 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.306156  115497 kubeconfig.go:92] found "default-k8s-diff-port-380424" server: "https://192.168.72.22:8444"
	I1206 19:55:31.308710  115497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:31.321910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.321976  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.339075  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.339096  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.339143  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.354436  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.765826  115217 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 19:55:32.765895  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:29.902670  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:29.903123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:29.903152  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:29.903081  116938 retry.go:31] will retry after 1.188380941s: waiting for machine to come up
	I1206 19:55:31.092707  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:31.093278  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:31.093311  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:31.093245  116938 retry.go:31] will retry after 1.854046475s: waiting for machine to come up
	I1206 19:55:32.948423  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:32.948866  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:32.948891  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:32.948827  116938 retry.go:31] will retry after 2.868825903s: waiting for machine to come up
	I1206 19:55:34.066100  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:34.066146  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:34.566904  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:34.573643  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:34.573675  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.066235  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.076927  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:35.076966  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.566361  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.574853  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 19:55:35.585855  115217 api_server.go:141] control plane version: v1.16.0
	I1206 19:55:35.585895  115217 api_server.go:131] duration metric: took 7.822706447s to wait for apiserver health ...
	I1206 19:55:35.585908  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:35.585917  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:35.587984  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:31.855148  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.855275  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.867628  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.355238  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.355330  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.368154  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.854710  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.854820  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.870926  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.355493  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.355586  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.371984  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.854511  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.854604  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.871260  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.354793  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.354897  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.371333  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.855487  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.855575  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.868348  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.354949  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.355026  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.367357  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.854910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.855003  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.871382  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:36.354908  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.355047  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.371112  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.589529  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:35.599454  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:35.616803  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:35.626793  115217 system_pods.go:59] 7 kube-system pods found
	I1206 19:55:35.626829  115217 system_pods.go:61] "coredns-5644d7b6d9-nrtk9" [447f7434-3f97-4e3f-9451-d9a54bff7ba1] Running
	I1206 19:55:35.626837  115217 system_pods.go:61] "etcd-old-k8s-version-448851" [77c1f822-788f-4f28-8f8e-54278d5d9e10] Running
	I1206 19:55:35.626843  115217 system_pods.go:61] "kube-apiserver-old-k8s-version-448851" [d3cf3d55-8862-4f81-ac61-99b202469859] Running
	I1206 19:55:35.626851  115217 system_pods.go:61] "kube-controller-manager-old-k8s-version-448851" [58ffb9bc-b5a3-4c64-a78f-da0011e6c277] Running
	I1206 19:55:35.626869  115217 system_pods.go:61] "kube-proxy-sw4qv" [6c08ab4a-447b-42e9-a617-ac35d66cf4ea] Running
	I1206 19:55:35.626879  115217 system_pods.go:61] "kube-scheduler-old-k8s-version-448851" [378ead75-3fd6-4cfd-a063-f2afc3a1cd12] Running
	I1206 19:55:35.626886  115217 system_pods.go:61] "storage-provisioner" [cce901c3-37d9-4ae2-ab9c-99bb7fda6d23] Running
	I1206 19:55:35.626901  115217 system_pods.go:74] duration metric: took 10.069819ms to wait for pod list to return data ...
	I1206 19:55:35.626910  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:35.632164  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:35.632240  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:35.632256  115217 node_conditions.go:105] duration metric: took 5.340532ms to run NodePressure ...
	I1206 19:55:35.632282  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:35.925990  115217 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:35.935849  115217 retry.go:31] will retry after 256.122518ms: kubelet not initialised
	I1206 19:55:36.197872  115217 retry.go:31] will retry after 337.717759ms: kubelet not initialised
	I1206 19:55:36.541368  115217 retry.go:31] will retry after 784.037462ms: kubelet not initialised
	I1206 19:55:37.331284  115217 retry.go:31] will retry after 921.381118ms: kubelet not initialised
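[editor's note] The system_pods wait a few lines above lists kube-system pods through the Kubernetes API before the restart proceeds. A minimal client-go sketch of the same idea; the kubeconfig path here is an assumption for illustration, not the path the test actually uses:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from a kubeconfig file; minikube writes one per profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print each pod and its phase, mirroring the "system_pods" log lines.
    	for _, p := range pods.Items {
    		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
    	}
    }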
	I1206 19:55:35.819131  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:35.819759  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:35.819793  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:35.819698  116938 retry.go:31] will retry after 2.281000862s: waiting for machine to come up
	I1206 19:55:38.103281  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:38.103807  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:38.103845  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:38.103736  116938 retry.go:31] will retry after 3.076134377s: waiting for machine to come up
	I1206 19:55:36.855191  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.855309  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.872110  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.354562  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.354682  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.370156  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.854600  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.854726  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.870621  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.355289  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.355391  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.368595  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.855116  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.855218  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.868455  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.354955  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.355048  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.368875  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.854833  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.854928  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.866765  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.354989  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.355106  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.367526  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.854791  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.854873  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.866579  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:41.322422  115497 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:41.322456  115497 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:41.322472  115497 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:41.322548  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:41.360234  115497 cri.go:89] found id: ""
	I1206 19:55:41.360301  115497 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:41.376968  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:41.387639  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:41.387694  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397586  115497 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397617  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:38.258758  115217 retry.go:31] will retry after 961.817778ms: kubelet not initialised
	I1206 19:55:39.225505  115217 retry.go:31] will retry after 1.751905914s: kubelet not initialised
	I1206 19:55:40.982344  115217 retry.go:31] will retry after 1.649102014s: kubelet not initialised
	I1206 19:55:42.639410  115217 retry.go:31] will retry after 3.317462401s: kubelet not initialised
	I1206 19:55:41.182443  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:41.182893  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:41.182930  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:41.182837  116938 retry.go:31] will retry after 4.029797575s: waiting for machine to come up
	I1206 19:55:41.519134  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.404075  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.613308  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.707533  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.796041  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:42.796139  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:42.816782  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.336582  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.836183  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.336879  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.836718  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.336249  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.363947  115497 api_server.go:72] duration metric: took 2.567911355s to wait for apiserver process to appear ...
	I1206 19:55:45.363968  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:45.363984  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:46.486502  115078 start.go:369] acquired machines lock for "no-preload-989559" in 57.98684139s
	I1206 19:55:46.486560  115078 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:46.486570  115078 fix.go:54] fixHost starting: 
	I1206 19:55:46.487006  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:46.487052  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:46.506170  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1206 19:55:46.506576  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:46.507081  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:55:46.507110  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:46.507412  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:46.507600  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:55:46.508110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:55:46.509817  115078 fix.go:102] recreateIfNeeded on no-preload-989559: state=Stopped err=<nil>
	I1206 19:55:46.509843  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	W1206 19:55:46.509988  115078 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:46.512103  115078 out.go:177] * Restarting existing kvm2 VM for "no-preload-989559" ...
	I1206 19:55:45.214823  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215271  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has current primary IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215293  115591 main.go:141] libmachine: (embed-certs-209025) Found IP for machine: 192.168.50.164
	I1206 19:55:45.215341  115591 main.go:141] libmachine: (embed-certs-209025) Reserving static IP address...
	I1206 19:55:45.215738  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.215772  115591 main.go:141] libmachine: (embed-certs-209025) DBG | skip adding static IP to network mk-embed-certs-209025 - found existing host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"}
	I1206 19:55:45.215787  115591 main.go:141] libmachine: (embed-certs-209025) Reserved static IP address: 192.168.50.164
	I1206 19:55:45.215805  115591 main.go:141] libmachine: (embed-certs-209025) Waiting for SSH to be available...
	I1206 19:55:45.215821  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Getting to WaitForSSH function...
	I1206 19:55:45.217850  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.218219  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218370  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH client type: external
	I1206 19:55:45.218404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa (-rw-------)
	I1206 19:55:45.218438  115591 main.go:141] libmachine: (embed-certs-209025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:45.218452  115591 main.go:141] libmachine: (embed-certs-209025) DBG | About to run SSH command:
	I1206 19:55:45.218475  115591 main.go:141] libmachine: (embed-certs-209025) DBG | exit 0
	I1206 19:55:45.309353  115591 main.go:141] libmachine: (embed-certs-209025) DBG | SSH cmd err, output: <nil>: 
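[editor's note] WaitForSSH above shells out to the system ssh client and runs "exit 0" to prove the VM is reachable and the key is accepted. The same liveness probe can be sketched with golang.org/x/crypto/ssh; host, user, and key path are copied from the log, and this is illustrative rather than libmachine's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.50.164:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	// Running "exit 0" succeeds only once sshd is up and the key is accepted.
    	if err := session.Run("exit 0"); err != nil {
    		panic(err)
    	}
    	fmt.Println("SSH is available")
    }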
	I1206 19:55:45.309758  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetConfigRaw
	I1206 19:55:45.310547  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.313019  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.313369  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313638  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:55:45.313844  115591 machine.go:88] provisioning docker machine ...
	I1206 19:55:45.313870  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:45.314081  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314264  115591 buildroot.go:166] provisioning hostname "embed-certs-209025"
	I1206 19:55:45.314298  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314509  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.316952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317361  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.317395  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.317821  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.317954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.318079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.318235  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.318665  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.318683  115591 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-209025 && echo "embed-certs-209025" | sudo tee /etc/hostname
	I1206 19:55:45.459071  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-209025
	
	I1206 19:55:45.459107  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.461953  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.462362  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462592  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.462814  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463010  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.463353  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.463887  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.463916  115591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-209025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-209025/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-209025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:45.597186  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:45.597220  115591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:45.597253  115591 buildroot.go:174] setting up certificates
	I1206 19:55:45.597270  115591 provision.go:83] configureAuth start
	I1206 19:55:45.597288  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.597658  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.600582  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.600954  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.600983  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.601138  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.603355  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.603774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603942  115591 provision.go:138] copyHostCerts
	I1206 19:55:45.604012  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:45.604037  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:45.604113  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:45.604227  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:45.604243  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:45.604277  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:45.604353  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:45.604363  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:45.604390  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:45.604454  115591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-209025 san=[192.168.50.164 192.168.50.164 localhost 127.0.0.1 minikube embed-certs-209025]
	I1206 19:55:45.706944  115591 provision.go:172] copyRemoteCerts
	I1206 19:55:45.707028  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:45.707069  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.709985  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710357  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.710398  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710530  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.710738  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.710917  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.711092  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:45.807035  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:45.831480  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:45.855902  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 19:55:45.882797  115591 provision.go:86] duration metric: configureAuth took 285.508678ms
	I1206 19:55:45.882831  115591 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:45.883074  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:45.883156  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.886130  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886576  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.886611  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886825  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.887026  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887198  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887348  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.887570  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.887900  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.887926  115591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:46.217654  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:46.217732  115591 machine.go:91] provisioned docker machine in 903.869734ms
	I1206 19:55:46.217748  115591 start.go:300] post-start starting for "embed-certs-209025" (driver="kvm2")
	I1206 19:55:46.217762  115591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:46.217788  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.218154  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:46.218190  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.220968  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221345  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.221378  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221557  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.221781  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.221951  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.222093  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.316289  115591 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:46.321014  115591 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:46.321046  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:46.321108  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:46.321183  115591 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:46.321304  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:46.331967  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:46.358983  115591 start.go:303] post-start completed in 141.214825ms
	I1206 19:55:46.359014  115591 fix.go:56] fixHost completed within 22.668688221s
	I1206 19:55:46.359037  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.361846  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362179  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.362212  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362452  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.362704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.362897  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.363073  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.363310  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:46.363803  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:46.363823  115591 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:46.486321  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892546.422221924
	
	I1206 19:55:46.486350  115591 fix.go:206] guest clock: 1701892546.422221924
	I1206 19:55:46.486361  115591 fix.go:219] Guest: 2023-12-06 19:55:46.422221924 +0000 UTC Remote: 2023-12-06 19:55:46.359018 +0000 UTC m=+296.897065855 (delta=63.203924ms)
	I1206 19:55:46.486385  115591 fix.go:190] guest clock delta is within tolerance: 63.203924ms
	I1206 19:55:46.486391  115591 start.go:83] releasing machines lock for "embed-certs-209025", held for 22.796102432s
	I1206 19:55:46.486420  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.486727  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:46.489589  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.489890  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.489922  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.490079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490643  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490836  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490924  115591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:46.490974  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.491257  115591 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:46.491281  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.494034  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494326  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494379  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494405  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.494748  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494900  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.494958  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.495019  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495144  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.495137  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.495269  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495397  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.587575  115591 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:46.614901  115591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:46.764133  115591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:46.771049  115591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:46.771133  115591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:46.786157  115591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:46.786187  115591 start.go:475] detecting cgroup driver to use...
	I1206 19:55:46.786262  115591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:46.801158  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:46.812881  115591 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:46.812948  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:46.825139  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:46.838071  115591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:46.949823  115591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:47.080490  115591 docker.go:219] disabling docker service ...
	I1206 19:55:47.080572  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:47.094773  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:47.107963  115591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:47.233536  115591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:47.360425  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:47.377453  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:47.395959  115591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:47.396026  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.406599  115591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:47.406696  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.417082  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.427463  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.438246  115591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:47.449910  115591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:47.459620  115591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:47.459675  115591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:47.476230  115591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:47.486777  115591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:47.597395  115591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:47.809260  115591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:47.809348  115591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:47.815968  115591 start.go:543] Will wait 60s for crictl version
	I1206 19:55:47.816035  115591 ssh_runner.go:195] Run: which crictl
	I1206 19:55:47.820214  115591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:47.869345  115591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:47.869435  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.923602  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.983187  115591 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:45.963265  115217 retry.go:31] will retry after 4.496095904s: kubelet not initialised
	I1206 19:55:47.984954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:47.988218  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.988742  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:47.988775  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.989031  115591 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:47.994471  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:48.008964  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:48.009022  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:48.056234  115591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:48.056333  115591 ssh_runner.go:195] Run: which lz4
	I1206 19:55:48.061573  115591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:48.066119  115591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:48.066156  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:46.513897  115078 main.go:141] libmachine: (no-preload-989559) Calling .Start
	I1206 19:55:46.514072  115078 main.go:141] libmachine: (no-preload-989559) Ensuring networks are active...
	I1206 19:55:46.514830  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network default is active
	I1206 19:55:46.515153  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network mk-no-preload-989559 is active
	I1206 19:55:46.515533  115078 main.go:141] libmachine: (no-preload-989559) Getting domain xml...
	I1206 19:55:46.516251  115078 main.go:141] libmachine: (no-preload-989559) Creating domain...
	I1206 19:55:47.899847  115078 main.go:141] libmachine: (no-preload-989559) Waiting to get IP...
	I1206 19:55:47.900939  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:47.901513  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:47.901634  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:47.901487  117094 retry.go:31] will retry after 244.343929ms: waiting for machine to come up
	I1206 19:55:48.148254  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.148888  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.148927  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.148835  117094 retry.go:31] will retry after 258.755356ms: waiting for machine to come up
	I1206 19:55:48.409550  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.410401  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.410427  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.410308  117094 retry.go:31] will retry after 321.790541ms: waiting for machine to come up
	I1206 19:55:48.734055  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.734744  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.734768  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.734646  117094 retry.go:31] will retry after 464.789653ms: waiting for machine to come up
	I1206 19:55:49.201462  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.202032  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.202065  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.201985  117094 retry.go:31] will retry after 541.238407ms: waiting for machine to come up
	I1206 19:55:49.744792  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.745432  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.745461  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.745338  117094 retry.go:31] will retry after 791.407194ms: waiting for machine to come up
	I1206 19:55:50.538151  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:50.538857  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:50.538883  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:50.538741  117094 retry.go:31] will retry after 1.11510814s: waiting for machine to come up
	I1206 19:55:49.730248  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.730287  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:49.730318  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:49.788747  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.788796  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:50.289144  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.301437  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.301479  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:50.789018  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.800325  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.800374  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:51.289899  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:51.297638  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 19:55:51.310738  115497 api_server.go:141] control plane version: v1.28.4
	I1206 19:55:51.310772  115497 api_server.go:131] duration metric: took 5.946796569s to wait for apiserver health ...
	I1206 19:55:51.310784  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:51.310793  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:51.312719  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:51.314431  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:51.335045  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:51.365598  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:51.381865  115497 system_pods.go:59] 8 kube-system pods found
	I1206 19:55:51.381914  115497 system_pods.go:61] "coredns-5dd5756b68-4rgxf" [2ae6daa5-430f-4f14-a40c-c29f4757fb06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:55:51.381936  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [895b0cdf-86c9-4b14-a633-4b172471cd2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:55:51.381947  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [ccc042d4-cd4c-4769-adc6-99d792146d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:55:51.381963  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [b3fbba6f-fa71-489e-81b0-0196bb019273] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:55:51.381972  115497 system_pods.go:61] "kube-proxy-9ftnp" [4389fff8-1b22-47a5-af97-35a4e5b6c2b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:55:51.381981  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [b53c666c-cc84-4dd3-b208-35d04113381c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:55:51.381997  115497 system_pods.go:61] "metrics-server-57f55c9bc5-7bblg" [3a6477d9-cb91-48cb-ba03-8b669db53841] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:55:51.382006  115497 system_pods.go:61] "storage-provisioner" [b8f06027-e37c-4c09-b361-4d70af65c991] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:55:51.382020  115497 system_pods.go:74] duration metric: took 16.393796ms to wait for pod list to return data ...
	I1206 19:55:51.382041  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:51.389181  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:51.389242  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:51.389256  115497 node_conditions.go:105] duration metric: took 7.208817ms to run NodePressure ...
	I1206 19:55:51.389285  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:50.466490  115217 retry.go:31] will retry after 11.434043258s: kubelet not initialised
	I1206 19:55:49.900059  115591 crio.go:444] Took 1.838540 seconds to copy over tarball
	I1206 19:55:49.900171  115591 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:53.471724  115591 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.571512743s)
	I1206 19:55:53.471757  115591 crio.go:451] Took 3.571659 seconds to extract the tarball
	I1206 19:55:53.471770  115591 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:53.522151  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:53.578068  115591 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:53.578167  115591 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:53.578285  115591 ssh_runner.go:195] Run: crio config
	I1206 19:55:53.650688  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:55:53.650715  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:53.650736  115591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:53.650762  115591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-209025 NodeName:embed-certs-209025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:53.650938  115591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-209025"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:53.651025  115591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-209025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:53.651093  115591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:53.663792  115591 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:53.663869  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:53.674126  115591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 19:55:53.692175  115591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:53.708842  115591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1206 19:55:53.726141  115591 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:53.730310  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:53.742456  115591 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025 for IP: 192.168.50.164
	I1206 19:55:53.742489  115591 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:53.742712  115591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:53.742765  115591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:53.742841  115591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/client.key
	I1206 19:55:53.742898  115591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key.d84b90a2
	I1206 19:55:53.742941  115591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key
	I1206 19:55:53.743053  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:53.743081  115591 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:53.743096  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:53.743135  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:53.743172  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:53.743205  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:53.743265  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:53.743932  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:53.770792  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:53.795080  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:53.820920  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 19:55:53.849068  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:53.875210  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:53.900201  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:53.927067  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:53.952810  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:53.979374  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:54.005013  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:54.028072  115591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:54.047087  115591 ssh_runner.go:195] Run: openssl version
	I1206 19:55:54.052949  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:54.064662  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069695  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069767  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.076520  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:54.088312  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:54.100303  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105718  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105787  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.111543  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:54.124094  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:54.137418  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142536  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142611  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.148497  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:54.160909  115591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:54.165739  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:54.171884  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:54.179765  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:54.187615  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:54.195156  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:54.203228  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 19:55:54.210119  115591 kubeadm.go:404] StartCluster: {Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:54.210251  115591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:54.210308  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:54.258252  115591 cri.go:89] found id: ""
	I1206 19:55:54.258347  115591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:54.270699  115591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:54.270724  115591 kubeadm.go:636] restartCluster start
	I1206 19:55:54.270785  115591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:54.281833  115591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.282964  115591 kubeconfig.go:92] found "embed-certs-209025" server: "https://192.168.50.164:8443"
	I1206 19:55:54.285394  115591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:54.296437  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.296545  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.309685  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.309707  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.309774  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.322265  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:51.655238  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:51.655732  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:51.655776  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:51.655642  117094 retry.go:31] will retry after 958.384892ms: waiting for machine to come up
	I1206 19:55:52.616005  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:52.616540  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:52.616583  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:52.616471  117094 retry.go:31] will retry after 1.537571193s: waiting for machine to come up
	I1206 19:55:54.155949  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:54.156397  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:54.156429  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:54.156344  117094 retry.go:31] will retry after 2.030397746s: waiting for machine to come up
	I1206 19:55:51.771991  115497 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:51.786960  115497 kubeadm.go:787] kubelet initialised
	I1206 19:55:51.787056  115497 kubeadm.go:788] duration metric: took 14.962005ms waiting for restarted kubelet to initialise ...
	I1206 19:55:51.787080  115497 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:55:51.799090  115497 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:53.845695  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:55.850483  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:54.823014  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.823105  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.835793  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.323393  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.323491  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.337041  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.823330  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.823437  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.839489  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.323250  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.323356  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.340029  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.822585  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.822700  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.835752  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.323326  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.323413  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.339916  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.823386  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.823478  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.840112  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.322441  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.322557  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.335485  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.822575  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.822695  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.839495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:59.323053  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.323129  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.336117  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.188549  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:56.189073  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:56.189105  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:56.189026  117094 retry.go:31] will retry after 2.455387871s: waiting for machine to come up
	I1206 19:55:58.646361  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:58.646772  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:58.646804  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:58.646710  117094 retry.go:31] will retry after 3.286246406s: waiting for machine to come up
	I1206 19:55:57.344443  115497 pod_ready.go:92] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"True"
	I1206 19:55:57.344478  115497 pod_ready.go:81] duration metric: took 5.545343389s waiting for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:57.344492  115497 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:59.363596  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.363703  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.907869  115217 retry.go:31] will retry after 21.572905296s: kubelet not initialised
	I1206 19:55:59.823000  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.823148  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.836153  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.322534  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.322617  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.340369  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.822851  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.822947  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.836512  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.323083  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.323161  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.337092  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.822623  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.822761  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.836428  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.323125  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.323213  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.336617  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.823198  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.823287  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.835923  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.322426  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.322527  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.336495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.822711  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.822803  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.836624  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:04.297216  115591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:04.297278  115591 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:04.297295  115591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:04.297393  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:04.343930  115591 cri.go:89] found id: ""
	I1206 19:56:04.344015  115591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:04.364785  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:04.376251  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:04.376320  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387749  115591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387779  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:04.511034  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:01.934204  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:01.934775  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:01.934798  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:01.934724  117094 retry.go:31] will retry after 2.967009815s: waiting for machine to come up
	I1206 19:56:04.903296  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:04.903725  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:04.903747  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:04.903692  117094 retry.go:31] will retry after 4.817836653s: waiting for machine to come up
	I1206 19:56:03.862804  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:04.373174  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.373209  115497 pod_ready.go:81] duration metric: took 7.028708302s waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.373222  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383300  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.383324  115497 pod_ready.go:81] duration metric: took 10.094356ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383333  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390225  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.390254  115497 pod_ready.go:81] duration metric: took 6.909695ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390267  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396713  115497 pod_ready.go:92] pod "kube-proxy-9ftnp" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.396753  115497 pod_ready.go:81] duration metric: took 6.477432ms waiting for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396766  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407015  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.407042  115497 pod_ready.go:81] duration metric: took 10.266604ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407056  115497 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:05.819075  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.307992443s)
	I1206 19:56:05.819111  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.024824  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.120865  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.207869  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:06.207959  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.221306  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.734164  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.234302  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.734130  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.233726  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.734073  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.762848  115591 api_server.go:72] duration metric: took 2.554978073s to wait for apiserver process to appear ...
	I1206 19:56:08.762881  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:08.762903  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:09.723600  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724078  115078 main.go:141] libmachine: (no-preload-989559) Found IP for machine: 192.168.39.5
	I1206 19:56:09.724107  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has current primary IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724114  115078 main.go:141] libmachine: (no-preload-989559) Reserving static IP address...
	I1206 19:56:09.724466  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.724509  115078 main.go:141] libmachine: (no-preload-989559) DBG | skip adding static IP to network mk-no-preload-989559 - found existing host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"}
	I1206 19:56:09.724526  115078 main.go:141] libmachine: (no-preload-989559) Reserved static IP address: 192.168.39.5
	I1206 19:56:09.724536  115078 main.go:141] libmachine: (no-preload-989559) Waiting for SSH to be available...
	I1206 19:56:09.724546  115078 main.go:141] libmachine: (no-preload-989559) DBG | Getting to WaitForSSH function...
	I1206 19:56:09.726831  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727117  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.727149  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727248  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH client type: external
	I1206 19:56:09.727277  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa (-rw-------)
	I1206 19:56:09.727306  115078 main.go:141] libmachine: (no-preload-989559) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:56:09.727317  115078 main.go:141] libmachine: (no-preload-989559) DBG | About to run SSH command:
	I1206 19:56:09.727361  115078 main.go:141] libmachine: (no-preload-989559) DBG | exit 0
	I1206 19:56:09.866040  115078 main.go:141] libmachine: (no-preload-989559) DBG | SSH cmd err, output: <nil>: 
	I1206 19:56:09.866443  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetConfigRaw
	I1206 19:56:09.867193  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:09.869892  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870335  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.870374  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870612  115078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/config.json ...
	I1206 19:56:09.870870  115078 machine.go:88] provisioning docker machine ...
	I1206 19:56:09.870895  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:09.871120  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871299  115078 buildroot.go:166] provisioning hostname "no-preload-989559"
	I1206 19:56:09.871320  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871471  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:09.874146  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874514  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.874554  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874741  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:09.874943  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875114  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875258  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:09.875412  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:09.875921  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:09.875942  115078 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-989559 && echo "no-preload-989559" | sudo tee /etc/hostname
	I1206 19:56:10.017205  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-989559
	
	I1206 19:56:10.017259  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.020397  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.020843  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.020873  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.021040  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.021287  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021450  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021578  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.021773  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.022227  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.022255  115078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-989559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-989559/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-989559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:56:10.160934  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:56:10.161020  115078 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:56:10.161056  115078 buildroot.go:174] setting up certificates
	I1206 19:56:10.161072  115078 provision.go:83] configureAuth start
	I1206 19:56:10.161086  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:10.161464  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:10.164558  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.164956  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.165007  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.165246  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.167911  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168352  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.168412  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168529  115078 provision.go:138] copyHostCerts
	I1206 19:56:10.168589  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:56:10.168612  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:56:10.168675  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:56:10.168796  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:56:10.168811  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:56:10.168844  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:56:10.168923  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:56:10.168962  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:56:10.168990  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:56:10.169062  115078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.no-preload-989559 san=[192.168.39.5 192.168.39.5 localhost 127.0.0.1 minikube no-preload-989559]
	I1206 19:56:10.266595  115078 provision.go:172] copyRemoteCerts
	I1206 19:56:10.266665  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:56:10.266693  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.269388  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269786  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.269813  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269987  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.270226  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.270390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.270536  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.362922  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:56:10.388165  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 19:56:10.412473  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:56:10.436804  115078 provision.go:86] duration metric: configureAuth took 275.714086ms
	I1206 19:56:10.436840  115078 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:56:10.437076  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 19:56:10.437156  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.439999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440419  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.440461  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440567  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.440813  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441006  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441213  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.441393  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.441827  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.441844  115078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:56:10.766695  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:56:10.766726  115078 machine.go:91] provisioned docker machine in 895.840237ms
	I1206 19:56:10.766739  115078 start.go:300] post-start starting for "no-preload-989559" (driver="kvm2")
	I1206 19:56:10.766759  115078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:56:10.766780  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:10.767134  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:56:10.767175  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.770309  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770704  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.770733  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770881  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.771110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.771247  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.771414  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.869486  115078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:56:10.874406  115078 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:56:10.874433  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:56:10.874502  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:56:10.874584  115078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:56:10.874684  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:56:10.885837  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:10.910379  115078 start.go:303] post-start completed in 143.622076ms
	I1206 19:56:10.910406  115078 fix.go:56] fixHost completed within 24.423837205s
	I1206 19:56:10.910430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.913414  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.913887  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.913924  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.914062  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.914276  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914575  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.914741  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.915078  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.915096  115078 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:56:06.672320  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:09.170277  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.173418  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.046393  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892571.030057611
	
	I1206 19:56:11.046418  115078 fix.go:206] guest clock: 1701892571.030057611
	I1206 19:56:11.046427  115078 fix.go:219] Guest: 2023-12-06 19:56:11.030057611 +0000 UTC Remote: 2023-12-06 19:56:10.910410702 +0000 UTC m=+364.968252500 (delta=119.646909ms)
	I1206 19:56:11.046452  115078 fix.go:190] guest clock delta is within tolerance: 119.646909ms
	I1206 19:56:11.046460  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 24.559924375s
	I1206 19:56:11.046485  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.046751  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:11.049522  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.049918  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.049958  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.050160  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050715  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050932  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.051010  115078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:56:11.051063  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.051201  115078 ssh_runner.go:195] Run: cat /version.json
	I1206 19:56:11.051234  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.054142  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054342  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054556  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054587  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054711  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.054925  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054930  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.054950  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.055054  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.055147  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055316  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.055338  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.055483  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055605  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.180256  115078 ssh_runner.go:195] Run: systemctl --version
	I1206 19:56:11.186702  115078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:56:11.339386  115078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:56:11.346262  115078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:56:11.346364  115078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:56:11.362865  115078 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:56:11.362902  115078 start.go:475] detecting cgroup driver to use...
	I1206 19:56:11.362988  115078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:56:11.383636  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:56:11.397157  115078 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:56:11.397264  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:56:11.411338  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:56:11.425762  115078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:56:11.560730  115078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:56:11.708633  115078 docker.go:219] disabling docker service ...
	I1206 19:56:11.708713  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:56:11.723172  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:56:11.737032  115078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:56:11.851037  115078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:56:11.969321  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:56:11.982745  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:56:12.003130  115078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:56:12.003215  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.013345  115078 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:56:12.013428  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.023765  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.034114  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.044159  115078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:56:12.054135  115078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:56:12.062781  115078 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:56:12.062861  115078 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:56:12.076322  115078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:56:12.085924  115078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:56:12.216360  115078 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:56:12.409482  115078 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:56:12.409550  115078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:56:12.417063  115078 start.go:543] Will wait 60s for crictl version
	I1206 19:56:12.417135  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:12.422177  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:56:12.474340  115078 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:56:12.474449  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.538091  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.604444  115078 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 19:56:12.144887  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.144921  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.144936  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.179318  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.179366  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.679803  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.694412  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:12.694449  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.179503  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.193118  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:13.193161  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.679759  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.685603  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 19:56:13.694792  115591 api_server.go:141] control plane version: v1.28.4
	I1206 19:56:13.694831  115591 api_server.go:131] duration metric: took 4.931941572s to wait for apiserver health ...
	I1206 19:56:13.694843  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:56:13.694852  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:13.697042  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:13.698653  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:13.712991  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:13.734001  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:13.761962  115591 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:13.762001  115591 system_pods.go:61] "coredns-5dd5756b68-cpst4" [e7d8324e-8468-470c-b532-1f09ee805bab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:13.762022  115591 system_pods.go:61] "etcd-embed-certs-209025" [eeb81149-8e43-4efe-b977-e8f84c7a7c57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:13.762032  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b64e228d-4921-4e35-b80c-343f8519076e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:13.762041  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [2206d849-0724-42c9-b5c4-4d84c3cafce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:13.762053  115591 system_pods.go:61] "kube-proxy-pt8nj" [b7cffe6a-4401-40e0-8056-68452e15b57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:13.762068  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [88ae7a94-a1bc-463a-9253-5f308ec1755e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:13.762077  115591 system_pods.go:61] "metrics-server-57f55c9bc5-dr9k8" [0dbe18a4-d30d-4882-b188-b0d1f1b1d711] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:13.762092  115591 system_pods.go:61] "storage-provisioner" [afebf144-9062-4b43-a491-9eecd5ab6c10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:13.762109  115591 system_pods.go:74] duration metric: took 28.078588ms to wait for pod list to return data ...
	I1206 19:56:13.762120  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:13.773614  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:13.773646  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:13.773657  115591 node_conditions.go:105] duration metric: took 11.528993ms to run NodePressure ...
	I1206 19:56:13.773678  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:14.157761  115591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169588  115591 kubeadm.go:787] kubelet initialised
	I1206 19:56:14.169632  115591 kubeadm.go:788] duration metric: took 11.756226ms waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169644  115591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:14.186031  115591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.211717  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211747  115591 pod_ready.go:81] duration metric: took 25.681607ms waiting for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.211759  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211769  115591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.219369  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219396  115591 pod_ready.go:81] duration metric: took 7.594898ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.219408  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219425  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.233417  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233513  115591 pod_ready.go:81] duration metric: took 14.073312ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.233535  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233546  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.244480  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244516  115591 pod_ready.go:81] duration metric: took 10.958431ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.244530  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244537  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:12.606102  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:12.609040  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609395  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:12.609436  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609665  115078 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:56:12.615279  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:12.629571  115078 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 19:56:12.629641  115078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:56:12.674728  115078 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 19:56:12.674763  115078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:56:12.674870  115078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.674886  115078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.674910  115078 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.674923  115078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.674965  115078 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1206 19:56:12.674885  115078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.674998  115078 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.674889  115078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676510  115078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.676539  115078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676563  115078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.676576  115078 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1206 19:56:12.676511  115078 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.676599  115078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.676624  115078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.676642  115078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.862606  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1206 19:56:12.882993  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.884387  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.900149  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.909389  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.916391  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.924669  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.946885  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.028628  115078 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1206 19:56:13.028685  115078 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.028741  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.095076  115078 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1206 19:56:13.095139  115078 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.095289  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.136956  115078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1206 19:56:13.137003  115078 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 19:56:13.137074  115078 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.137130  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.137005  115078 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.137268  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.146913  115078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1206 19:56:13.146970  115078 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.147024  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.159866  115078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1206 19:56:13.159913  115078 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.159963  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162288  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.162330  115078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1206 19:56:13.162375  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.162378  115078 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.162399  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.162407  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.165637  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.319155  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1206 19:56:13.319253  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.319274  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.319300  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 19:56:13.319371  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319394  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:13.319405  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1206 19:56:13.319423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319472  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:13.319495  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.319545  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319621  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319546  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.376009  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376036  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376100  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376145  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1206 19:56:13.376179  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1206 19:56:13.376217  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1206 19:56:13.376273  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376302  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376329  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:13.376423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:15.530421  115078 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.153965348s)
	I1206 19:56:15.530466  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1206 19:56:15.530502  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.154372843s)
	I1206 19:56:15.530536  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1206 19:56:15.530571  115078 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:15.530630  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.177508  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:15.671903  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:14.963353  115591 pod_ready.go:92] pod "kube-proxy-pt8nj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:14.963382  115591 pod_ready.go:81] duration metric: took 718.835702ms waiting for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.963391  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:17.284373  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.354814  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.824152707s)
	I1206 19:56:19.354846  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1206 19:56:19.354874  115078 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:19.354924  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:20.402300  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.047341059s)
	I1206 19:56:20.402334  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 19:56:20.402378  115078 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:20.402442  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:17.672489  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:20.171526  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.771500  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:22.273627  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.269993  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.270019  115591 pod_ready.go:81] duration metric: took 8.306621129s waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.270029  115591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:22.575204  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.17273177s)
	I1206 19:56:22.575240  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1206 19:56:22.575270  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:22.575318  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:25.335616  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.760267154s)
	I1206 19:56:25.335646  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1206 19:56:25.335680  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:25.335760  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:22.175410  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:24.677136  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.486162  115217 kubeadm.go:787] kubelet initialised
	I1206 19:56:23.486192  115217 kubeadm.go:788] duration metric: took 47.560169603s waiting for restarted kubelet to initialise ...
	I1206 19:56:23.486203  115217 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:23.491797  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499126  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.499149  115217 pod_ready.go:81] duration metric: took 7.327003ms waiting for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499160  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.503979  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.504002  115217 pod_ready.go:81] duration metric: took 4.834039ms waiting for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.504014  115217 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509110  115217 pod_ready.go:92] pod "etcd-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.509132  115217 pod_ready.go:81] duration metric: took 5.109845ms waiting for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509153  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514641  115217 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.514665  115217 pod_ready.go:81] duration metric: took 5.502762ms waiting for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514677  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886694  115217 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.886726  115217 pod_ready.go:81] duration metric: took 372.040617ms waiting for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886741  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287638  115217 pod_ready.go:92] pod "kube-proxy-sw4qv" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.287662  115217 pod_ready.go:81] duration metric: took 400.914693ms waiting for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287673  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688298  115217 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.688328  115217 pod_ready.go:81] duration metric: took 400.645544ms waiting for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688343  115217 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:26.991669  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:25.288536  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.290135  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.610095  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.274298339s)
	I1206 19:56:27.610132  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1206 19:56:27.610163  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:27.610219  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:30.272712  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.662458967s)
	I1206 19:56:30.272746  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1206 19:56:30.272782  115078 cache_images.go:123] Successfully loaded all cached images
	I1206 19:56:30.272789  115078 cache_images.go:92] LoadImages completed in 17.598011028s
	I1206 19:56:30.272883  115078 ssh_runner.go:195] Run: crio config
	I1206 19:56:30.341321  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:30.341346  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:30.341368  115078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:56:30.341392  115078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-989559 NodeName:no-preload-989559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:56:30.341597  115078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-989559"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:56:30.341693  115078 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-989559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:56:30.341758  115078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 19:56:30.351650  115078 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:56:30.351729  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:56:30.360413  115078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1206 19:56:30.376399  115078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 19:56:30.392522  115078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1206 19:56:30.409313  115078 ssh_runner.go:195] Run: grep 192.168.39.5	control-plane.minikube.internal$ /etc/hosts
	I1206 19:56:30.413355  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:30.426797  115078 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559 for IP: 192.168.39.5
	I1206 19:56:30.426854  115078 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:30.427070  115078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:56:30.427134  115078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:56:30.427240  115078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.key
	I1206 19:56:30.427311  115078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key.c9b343a5
	I1206 19:56:30.427350  115078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key
	I1206 19:56:30.427454  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:56:30.427508  115078 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:56:30.427521  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:56:30.427550  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:56:30.427571  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:56:30.427593  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:56:30.427634  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:30.428313  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:56:30.452268  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:56:30.476793  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:56:30.503751  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:56:30.530680  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:56:30.557770  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:56:30.582244  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:56:30.608096  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:56:30.634585  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:56:30.660669  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:56:30.686987  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:56:30.711098  115078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:56:30.727576  115078 ssh_runner.go:195] Run: openssl version
	I1206 19:56:30.733568  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:56:30.743777  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.748976  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.749033  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.755465  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:56:30.766285  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:56:30.777164  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782160  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782228  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.789394  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:56:30.801293  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:56:30.812646  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818147  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818209  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.824161  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:56:30.834389  115078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:56:30.839518  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:56:30.845997  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:56:30.852229  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:56:30.858622  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:56:30.864675  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:56:30.870945  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 19:56:30.878301  115078 kubeadm.go:404] StartCluster: {Name:no-preload-989559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:56:30.878438  115078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:56:30.878513  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:30.921588  115078 cri.go:89] found id: ""
	I1206 19:56:30.921692  115078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:56:30.932160  115078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:56:30.932190  115078 kubeadm.go:636] restartCluster start
	I1206 19:56:30.932264  115078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:56:30.942019  115078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.943237  115078 kubeconfig.go:92] found "no-preload-989559" server: "https://192.168.39.5:8443"
	I1206 19:56:30.945618  115078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:56:30.954582  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.954655  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.966532  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.966555  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.966602  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.979930  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:27.172625  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.671318  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:28.992218  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:30.994420  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.786922  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.787251  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.480021  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.480135  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.493287  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:31.980317  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.980409  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.994348  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.480929  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.481020  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.494940  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.980449  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.980559  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.993316  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.481040  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.481150  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.494210  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.980837  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.980936  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.994280  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.480389  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.480492  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.493915  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.980458  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.980569  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.994306  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.480788  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.480897  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:35.495397  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.980815  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.980919  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:32.171889  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:34.669989  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.491932  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.492626  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:37.991389  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.787950  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:38.288581  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	W1206 19:56:35.994848  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.480833  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.480959  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.496053  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.980074  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.980197  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.994615  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.480110  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.480203  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.494380  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.980922  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.981009  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.994865  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.480432  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.480536  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.494938  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.980148  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.980250  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.995427  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.481067  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.481153  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.494631  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.980142  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.980255  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.991638  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.480132  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:40.480205  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:40.492507  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.955413  115078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:40.955478  115078 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:40.955492  115078 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:40.955574  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:36.673986  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:39.172561  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:41.177049  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.490976  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.492210  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.293997  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.789693  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.997724  115078 cri.go:89] found id: ""
	I1206 19:56:40.997783  115078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:41.013137  115078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:41.021612  115078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:41.021667  115078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030846  115078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030878  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:41.160850  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.395616  115078 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234715721s)
	I1206 19:56:42.395651  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.595187  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.688245  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.769464  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:42.769566  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:42.783010  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.303551  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.803070  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.303922  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.803326  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.302954  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.323804  115078 api_server.go:72] duration metric: took 2.55435107s to wait for apiserver process to appear ...
	I1206 19:56:45.323839  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:45.323865  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.324588  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.324632  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.325115  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.825883  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:43.670089  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.670833  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:44.994670  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.492548  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.288109  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.788636  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.759033  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.759089  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.759117  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.778467  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.778502  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.825793  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.888751  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:49.888801  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.325211  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.330395  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.330438  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.826038  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.830801  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.830836  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:51.325298  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:51.331295  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 19:56:51.340412  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 19:56:51.340445  115078 api_server.go:131] duration metric: took 6.016598018s to wait for apiserver health ...
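The api_server.go lines above show the post-restart pattern: the /healthz endpoint is polled repeatedly, first getting connection refused, then 403 and 500 responses while poststarthooks finish, and finally 200. A minimal sketch of that polling loop in Go, not minikube's actual implementation (function name, timeouts, and the skipped TLS verification are assumptions for illustration only):

    // waitForHealthz polls an apiserver /healthz URL until it returns 200 or
    // the overall deadline expires; non-200 responses and dial errors are
    // retried, matching the retry behaviour visible in the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// The test cluster uses a self-signed CA, so verification is skipped
    	// here purely to keep the sketch self-contained.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: apiserver is healthy
    			}
    			// 403/500 while bootstrap hooks run: fall through and retry.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.5:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

In the log the same pattern takes about six seconds end to end before the control-plane version check succeeds.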
	I1206 19:56:51.340457  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:51.340465  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:51.383227  115078 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:47.671090  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:50.173835  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.494360  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.991886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.385027  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:51.399942  115078 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:51.422533  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:51.446615  115078 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:51.446661  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:51.446671  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:51.446684  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:51.446698  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:51.446707  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:51.446716  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:51.446731  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:51.446739  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:51.446749  115078 system_pods.go:74] duration metric: took 24.188803ms to wait for pod list to return data ...
	I1206 19:56:51.446758  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:51.452770  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:51.452803  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:51.452817  115078 node_conditions.go:105] duration metric: took 6.05327ms to run NodePressure ...
	I1206 19:56:51.452840  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:51.740786  115078 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746512  115078 kubeadm.go:787] kubelet initialised
	I1206 19:56:51.746541  115078 kubeadm.go:788] duration metric: took 5.720787ms waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746550  115078 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:51.752751  115078 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.761003  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761032  115078 pod_ready.go:81] duration metric: took 8.254381ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.761043  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761052  115078 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.766223  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766248  115078 pod_ready.go:81] duration metric: took 5.184525ms waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.766259  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766271  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.771516  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771541  115078 pod_ready.go:81] duration metric: took 5.262069ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.771552  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771561  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.827774  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827804  115078 pod_ready.go:81] duration metric: took 56.232455ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.827818  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827826  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.231699  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231761  115078 pod_ready.go:81] duration metric: took 403.922333ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.231774  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231790  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.626827  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626863  115078 pod_ready.go:81] duration metric: took 395.06457ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.626877  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626889  115078 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:53.028166  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028201  115078 pod_ready.go:81] duration metric: took 401.294916ms waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:53.028214  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028226  115078 pod_ready.go:38] duration metric: took 1.281664253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
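The pod_ready.go entries above (and the repeated "metrics-server ... has status Ready:False" lines from the other profiles) are all waiting on the same thing: the pod's Ready condition. A minimal sketch of that check using client-go, not minikube's pod_ready.go (the kubeconfig path and pod name below are taken from the log purely as placeholders):

    // podReady reports whether the named pod's Ready condition is True,
    // which is the condition the pod_ready.go wait loops above poll for.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Kubeconfig path copied from the log; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17740-63652/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podReady(cs, "kube-system", "etcd-no-preload-989559")
    	fmt.Println(ready, err)
    }

Here the checks are skipped early because the node itself is not yet Ready, which is why each wait ends with the "node ... is currently not Ready (skipping!)" warning rather than a pod-level result.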
	I1206 19:56:53.028249  115078 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:56:53.057673  115078 ops.go:34] apiserver oom_adj: -16
	I1206 19:56:53.057706  115078 kubeadm.go:640] restartCluster took 22.12550727s
	I1206 19:56:53.057718  115078 kubeadm.go:406] StartCluster complete in 22.179430573s
	I1206 19:56:53.057756  115078 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.057857  115078 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:56:53.059885  115078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.060125  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:56:53.060244  115078 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:56:53.060337  115078 addons.go:69] Setting storage-provisioner=true in profile "no-preload-989559"
	I1206 19:56:53.060364  115078 addons.go:231] Setting addon storage-provisioner=true in "no-preload-989559"
	I1206 19:56:53.060367  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	W1206 19:56:53.060375  115078 addons.go:240] addon storage-provisioner should already be in state true
	I1206 19:56:53.060404  115078 addons.go:69] Setting default-storageclass=true in profile "no-preload-989559"
	I1206 19:56:53.060415  115078 addons.go:69] Setting metrics-server=true in profile "no-preload-989559"
	I1206 19:56:53.060430  115078 addons.go:231] Setting addon metrics-server=true in "no-preload-989559"
	I1206 19:56:53.060433  115078 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-989559"
	W1206 19:56:53.060440  115078 addons.go:240] addon metrics-server should already be in state true
	I1206 19:56:53.060452  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060472  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060856  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060889  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060917  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.065950  115078 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-989559" context rescaled to 1 replicas
	I1206 19:56:53.065992  115078 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:56:53.068038  115078 out.go:177] * Verifying Kubernetes components...
	I1206 19:56:53.069775  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:56:53.077795  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I1206 19:56:53.078120  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
	I1206 19:56:53.078274  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078714  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078902  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.078928  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079207  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.079226  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079272  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079514  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079727  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.079865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.079899  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.083670  115078 addons.go:231] Setting addon default-storageclass=true in "no-preload-989559"
	W1206 19:56:53.083695  115078 addons.go:240] addon default-storageclass should already be in state true
	I1206 19:56:53.083724  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.084178  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.084230  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.097845  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1206 19:56:53.098357  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.099058  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.099080  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.099409  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.099633  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.101625  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.103641  115078 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 19:56:53.105081  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I1206 19:56:53.105105  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 19:56:53.105123  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 19:56:53.105150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.104327  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1206 19:56:53.105556  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105777  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105983  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.105998  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106312  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.106328  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106619  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.106910  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.107192  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107229  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.107338  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107398  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.108297  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.108969  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.108999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.109150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.109436  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.109586  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.109725  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.123985  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I1206 19:56:53.124496  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125052  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.125078  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.125325  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1206 19:56:53.125570  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.125785  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125826  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.126385  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.126413  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.126875  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.127050  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.127923  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.128212  115078 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.128226  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:56:53.128242  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.128747  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.131043  115078 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:53.131487  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132638  115078 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.132645  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.132651  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:56:53.132667  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.132682  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132132  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.133425  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.133636  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.133870  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.136039  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136583  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.136612  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136850  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.137087  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.137390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.137583  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.247726  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 19:56:53.247751  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 19:56:53.271421  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.296149  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 19:56:53.296181  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 19:56:53.303580  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.350607  115078 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1206 19:56:53.350607  115078 node_ready.go:35] waiting up to 6m0s for node "no-preload-989559" to be "Ready" ...
	I1206 19:56:53.355315  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.355336  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 19:56:53.392730  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.624768  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.624798  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625224  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625330  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.625353  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.625393  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625227  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:53.625849  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625874  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.632671  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.632691  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.632983  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.633005  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433395  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12977215s)
	I1206 19:56:54.433462  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433491  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433360  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040565961s)
	I1206 19:56:54.433546  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433567  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433833  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433854  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433863  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433867  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433871  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433842  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433908  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433926  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433939  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433951  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.434124  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434148  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434153  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434199  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434212  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434224  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434240  115078 addons.go:467] Verifying addon metrics-server=true in "no-preload-989559"
	I1206 19:56:54.437357  115078 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 19:56:50.289141  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:52.289568  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.438928  115078 addons.go:502] enable addons completed in 1.378684523s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 19:56:55.439812  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.174520  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.175288  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.492713  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:56.493106  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.789039  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.288485  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.289450  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.931320  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:57:00.430485  115078 node_ready.go:49] node "no-preload-989559" has status "Ready":"True"
	I1206 19:57:00.430517  115078 node_ready.go:38] duration metric: took 7.079875254s waiting for node "no-preload-989559" to be "Ready" ...
	I1206 19:57:00.430530  115078 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:57:00.436772  115078 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442667  115078 pod_ready.go:92] pod "coredns-76f75df574-h9pkz" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:00.442688  115078 pod_ready.go:81] duration metric: took 5.884841ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442701  115078 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:56.671845  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.172634  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.175416  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:58.991760  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:00.992295  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.787443  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.787988  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:02.468096  115078 pod_ready.go:102] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:04.965881  115078 pod_ready.go:92] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.965905  115078 pod_ready.go:81] duration metric: took 4.523195911s waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.965916  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971414  115078 pod_ready.go:92] pod "kube-apiserver-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.971433  115078 pod_ready.go:81] duration metric: took 5.510214ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971441  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977851  115078 pod_ready.go:92] pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.977870  115078 pod_ready.go:81] duration metric: took 6.422623ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977878  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985189  115078 pod_ready.go:92] pod "kube-proxy-zgqvt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.985215  115078 pod_ready.go:81] duration metric: took 7.330713ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985224  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230810  115078 pod_ready.go:92] pod "kube-scheduler-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:05.230835  115078 pod_ready.go:81] duration metric: took 245.59313ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230845  115078 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:03.189551  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.673064  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.491815  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.991689  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.992156  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.789026  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.789964  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.538620  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.040533  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:08.171042  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.671754  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.491886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.287716  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.788212  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.538291  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.541614  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.672138  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:15.171421  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.992060  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.502730  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.788301  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.287038  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.288646  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.038893  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.543137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.671258  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:20.170885  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.991949  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.491591  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:21.787339  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:23.788729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.041590  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.540137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.171069  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.670919  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.992198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.492171  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:26.290524  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:28.787761  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.039132  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.542736  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.170762  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.171345  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.992006  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.288189  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:33.787785  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.039418  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.039727  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.670563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.170705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.171236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.492161  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.492522  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:35.788140  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:37.788283  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.540765  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:39.038645  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.171622  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.670580  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.990433  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.990810  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.992228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.287403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.287578  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.287701  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:41.039767  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.539800  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.543374  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.173769  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.670574  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.995625  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:47.492316  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:46.289397  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.787659  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.038286  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.039013  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.176705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.670177  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:49.991919  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.491478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.788175  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.288824  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.040785  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.538521  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.173256  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.670940  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.492526  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.493207  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.787745  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:57.788237  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.539097  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.039024  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.174463  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.674095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.990652  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.993255  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.788454  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:02.287774  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:04.288180  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:01.042813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.541670  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.171100  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.673480  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.492375  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.991094  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:07.992159  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.288916  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.289817  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.038556  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.038962  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.539560  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.171785  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.671152  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:09.993042  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.491776  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.790823  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.791724  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.540234  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.542433  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.672062  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.170654  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.993921  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.492163  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.289223  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.787808  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.038754  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.039749  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.171210  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.670633  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.991157  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.991531  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.788614  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:22.288567  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.040007  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.047504  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:25.539859  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.671920  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.173543  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.993354  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.491975  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.789151  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.789703  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.287981  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.038595  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.039044  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.670809  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.171281  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.492552  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.990797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.991467  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.289190  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.788860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.046392  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.538829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.671784  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.672095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:36.171077  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.992478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.492021  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:35.789666  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.287860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.038795  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.537643  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.670088  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.171066  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.991754  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.994379  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:40.288183  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:42.788826  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.539212  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.543524  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.674139  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.170213  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:44.491092  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.491632  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:45.287473  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:47.288157  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:49.289525  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.038254  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.039117  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.039290  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.170319  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.671091  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.492359  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.992132  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:51.787368  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.788448  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:52.039474  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:54.540427  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.169921  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.171727  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.492764  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.993038  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:56.287644  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.288171  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.038915  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.039626  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.671011  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.671928  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.491565  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.492398  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.994198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.788591  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.789729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:01.540414  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:03.547448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.172546  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:04.670363  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.492399  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.991600  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.287805  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.289128  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.039393  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:08.040259  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.541882  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.670653  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.172460  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.491981  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.991797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.788064  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.544283  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:15.040829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:11.673737  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.172972  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.992556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.492610  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.788287  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.789265  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.287925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.542363  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:20.039068  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.674724  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:18.675236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.170028  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.493199  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.992164  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.288023  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.289315  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:22.539662  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.038813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.170153  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.172299  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:24.491811  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:26.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.788309  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.791911  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.539832  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:29.540277  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.671148  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.171591  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:28.990920  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.992085  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.992394  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.288522  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.288574  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:31.542448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.039116  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.671751  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.169968  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.492708  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.992344  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.787925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.788270  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:38.788369  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.539113  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.040215  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.171340  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.171482  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.491091  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:42.491915  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.789138  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.287352  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.538818  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.539787  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.670936  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.671019  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.671158  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:44.992666  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.491581  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.287493  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.787403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:46.039500  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.538497  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.539750  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.171563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.673901  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.991083  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.991943  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.788072  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.788139  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.788885  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.039532  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.539183  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.177102  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.670778  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.992408  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.492592  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.288587  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.288722  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:57.539766  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.038890  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.171948  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.173211  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.492926  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.992517  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.992971  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.291465  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.292084  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.039986  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.541022  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.674513  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.407290  115497 pod_ready.go:81] duration metric: took 4m0.000215571s waiting for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:04.407325  115497 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:04.407343  115497 pod_ready.go:38] duration metric: took 4m12.62023597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:04.407376  115497 kubeadm.go:640] restartCluster took 4m33.115368763s
	W1206 20:00:04.407460  115497 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:04.407558  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:05.492129  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:07.493228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.788290  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.789396  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:08.789507  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.541064  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.040499  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.992817  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:12.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.288813  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.788228  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.540420  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.540837  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:14.492803  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:16.991852  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.762771  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.35517444s)
	I1206 20:00:18.762878  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:18.777691  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:18.788508  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:18.798417  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:18.798483  115497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:18.858377  115497 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:18.858486  115497 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:19.020664  115497 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:19.020845  115497 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:19.020979  115497 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:19.294254  115497 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:15.788560  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.288173  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:19.296186  115497 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:19.296294  115497 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:19.296394  115497 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:19.296512  115497 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:19.296601  115497 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:19.296712  115497 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:19.296779  115497 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:19.296938  115497 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:19.297044  115497 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:19.297141  115497 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:19.297228  115497 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:19.297296  115497 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:19.297374  115497 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:19.401712  115497 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:19.667664  115497 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:19.977926  115497 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:20.161984  115497 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:20.162704  115497 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:20.165273  115497 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:16.040687  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.540495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.167168  115497 out.go:204]   - Booting up control plane ...
	I1206 20:00:20.167327  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:20.167488  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:20.167596  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:20.186839  115497 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:20.187950  115497 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:20.188122  115497 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:20.329099  115497 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:18.991946  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:21.490687  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.290780  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:22.293161  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.270450  115591 pod_ready.go:81] duration metric: took 4m0.000401122s waiting for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:23.270504  115591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:23.270527  115591 pod_ready.go:38] duration metric: took 4m9.100871827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:23.270576  115591 kubeadm.go:640] restartCluster took 4m28.999844958s
	W1206 20:00:23.270666  115591 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:23.270705  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:21.040410  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.041625  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:25.044168  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.492875  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:24.689131  115217 pod_ready.go:81] duration metric: took 4m0.000750192s waiting for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:24.689173  115217 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:24.689203  115217 pod_ready.go:38] duration metric: took 4m1.202987977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:24.689247  115217 kubeadm.go:640] restartCluster took 5m10.459408033s
	W1206 20:00:24.689356  115217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:24.689392  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:29.334312  115497 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004152 seconds
	I1206 20:00:29.334473  115497 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:29.360390  115497 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:29.898911  115497 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:29.899167  115497 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-380424 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:30.416589  115497 kubeadm.go:322] [bootstrap-token] Using token: gsw79m.btql0t11yc11efah
	I1206 20:00:30.418388  115497 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:30.418538  115497 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:30.424651  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:30.439637  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:30.443854  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:30.448439  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:30.454084  115497 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:30.473340  115497 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:30.748803  115497 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:30.835721  115497 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:30.837289  115497 kubeadm.go:322] 
	I1206 20:00:30.837362  115497 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:30.837381  115497 kubeadm.go:322] 
	I1206 20:00:30.837449  115497 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:30.837457  115497 kubeadm.go:322] 
	I1206 20:00:30.837485  115497 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:30.837597  115497 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:30.837675  115497 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:30.837684  115497 kubeadm.go:322] 
	I1206 20:00:30.837760  115497 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:30.837770  115497 kubeadm.go:322] 
	I1206 20:00:30.837826  115497 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:30.837833  115497 kubeadm.go:322] 
	I1206 20:00:30.837899  115497 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:30.838016  115497 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:30.838114  115497 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:30.838124  115497 kubeadm.go:322] 
	I1206 20:00:30.838224  115497 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:30.838316  115497 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:30.838333  115497 kubeadm.go:322] 
	I1206 20:00:30.838409  115497 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838522  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:30.838559  115497 kubeadm.go:322] 	--control-plane 
	I1206 20:00:30.838568  115497 kubeadm.go:322] 
	I1206 20:00:30.838686  115497 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:30.838699  115497 kubeadm.go:322] 
	I1206 20:00:30.838805  115497 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838952  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
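Editor's note: the join command printed above embeds a --discovery-token-ca-cert-hash. As a hedged aside (standard kubeadm recipe, not taken from this log), the same hash can be recomputed from the cluster CA certificate; the certificate directory comes from the "[certs] Using certificateDir folder" line earlier, and the ca.crt file name is an assumption:

    # recompute the discovery-token-ca-cert-hash from the cluster CA (assumed path)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'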
	I1206 20:00:30.839686  115497 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:30.839714  115497 cni.go:84] Creating CNI manager for ""
	I1206 20:00:30.839727  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:30.841824  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:27.540848  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.038457  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.843246  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:30.916583  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
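Editor's note: the 457-byte bridge conflist copied above is not printed in the log. A hedged illustration, assuming the standard CNI bridge plugin format, of what a file of this shape could look like; the subnet, bridge name, and plugin list are assumptions, and the file minikube actually writes may differ:

    # illustrative bridge CNI conflist (contents assumed, not from the log)
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF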
	I1206 20:00:30.974088  115497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=default-k8s-diff-port-380424 minikube.k8s.io/updated_at=2023_12_06T20_00_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.400910  115497 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:31.401056  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.320362  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.630947418s)
	I1206 20:00:31.320445  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:31.349765  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:31.369412  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:31.381350  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:31.381410  115217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1206 20:00:31.626397  115217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:32.039425  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:34.041934  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:31.516285  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.139221  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.639059  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.139995  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.639038  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.139842  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.640037  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.139893  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.639961  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:36.139749  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.383787  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.113041618s)
	I1206 20:00:38.383859  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:38.397718  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:38.406748  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:38.415574  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:38.415633  115591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:38.485595  115591 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:38.485781  115591 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:38.659892  115591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:38.660073  115591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:38.660209  115591 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:38.939756  115591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:38.941971  115591 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:38.942103  115591 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:38.942200  115591 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:38.942296  115591 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:38.942708  115591 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:38.943817  115591 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:38.944130  115591 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:38.944894  115591 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:38.945607  115591 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:38.946355  115591 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:38.947015  115591 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:38.947720  115591 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:38.947795  115591 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:39.140045  115591 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:39.300047  115591 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:39.418439  115591 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:40.060938  115591 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:40.061616  115591 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:40.064208  115591 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:36.042049  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:38.540429  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:36.639372  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.139213  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.639506  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.139159  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.639007  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.139972  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.639969  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.139910  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.639836  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.139009  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.639153  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.139055  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.639853  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.139934  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.639741  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.139776  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.279581  115497 kubeadm.go:1088] duration metric: took 13.305461955s to wait for elevateKubeSystemPrivileges.
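Editor's note: the repeated "kubectl get sa default" runs above appear to be minikube polling, roughly every half second, until the default ServiceAccount exists before it reports elevateKubeSystemPrivileges as done. A hedged sketch of that wait loop; the loop shape is an illustration, while the binary path and kubeconfig are taken from the log lines:

    # poll until the default ServiceAccount exists (sketch of the loop seen above)
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done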
	I1206 20:00:44.279625  115497 kubeadm.go:406] StartCluster complete in 5m13.04588426s
	I1206 20:00:44.279660  115497 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.279765  115497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:00:44.282748  115497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.285263  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:00:44.285351  115497 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:00:44.285434  115497 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285459  115497 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285471  115497 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:00:44.285478  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:00:44.285531  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285542  115497 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285561  115497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-380424"
	I1206 20:00:44.285719  115497 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285738  115497 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285747  115497 addons.go:240] addon metrics-server should already be in state true
	I1206 20:00:44.285797  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286023  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286026  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286167  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286190  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.306223  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1206 20:00:44.306441  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39661
	I1206 20:00:44.307505  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.307637  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.308463  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I1206 20:00:44.308651  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.308672  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309154  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.309173  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309295  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.309539  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.310150  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.310183  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.310431  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.312432  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.313004  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.313020  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.315047  115497 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.315065  115497 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:00:44.315094  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.315499  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.315523  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.316248  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.316893  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.316920  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.335555  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I1206 20:00:44.335908  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1206 20:00:44.336636  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.336749  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.337379  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337404  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337791  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337818  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337895  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.338474  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.338502  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.338944  115497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-380424" context rescaled to 1 replicas
	I1206 20:00:44.338979  115497 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:00:44.340731  115497 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:44.339696  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.342367  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:44.342537  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.348774  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I1206 20:00:44.348808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.350935  115497 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:00:44.349433  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.353022  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:00:44.353036  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:00:44.353060  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.353493  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.353512  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.354850  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.355732  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.356894  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.359438  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1206 20:00:44.360009  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.360502  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.360525  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.360899  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.361092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.362575  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.362605  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.362663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.363067  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.363259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.363310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.363544  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.363628  115497 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.363643  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:00:44.363663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.365352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.367261  115497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:00:40.066048  115591 out.go:204]   - Booting up control plane ...
	I1206 20:00:40.066207  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:40.066320  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:40.069077  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:40.086558  115591 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:40.087856  115591 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:40.087969  115591 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:40.224157  115591 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.313051  115217 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1206 20:00:45.313125  115217 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:45.313226  115217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:45.313355  115217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:45.313466  115217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:45.313591  115217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:45.313697  115217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:45.313767  115217 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1206 20:00:45.313844  115217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:45.315759  115217 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:45.315876  115217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:45.315980  115217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:45.316085  115217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:45.316158  115217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:45.316252  115217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:45.316320  115217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:45.316420  115217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:45.316505  115217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:45.316608  115217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:45.316707  115217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:45.316761  115217 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:45.316838  115217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:45.316909  115217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:45.316982  115217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:45.317068  115217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:45.317136  115217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:45.317221  115217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:45.318915  115217 out.go:204]   - Booting up control plane ...
	I1206 20:00:45.319042  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:45.319145  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:45.319253  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:45.319367  115217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:45.319568  115217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.319690  115217 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504419 seconds
	I1206 20:00:45.319828  115217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:45.319978  115217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:45.320042  115217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:45.320189  115217 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-448851 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 20:00:45.320255  115217 kubeadm.go:322] [bootstrap-token] Using token: ms33mw.f0m2wm1rokle0nnu
	I1206 20:00:45.321976  115217 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:45.322105  115217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:45.322229  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:45.322373  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:45.322532  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:45.322673  115217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:45.322759  115217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:45.322845  115217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:45.322857  115217 kubeadm.go:322] 
	I1206 20:00:45.322936  115217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:45.322945  115217 kubeadm.go:322] 
	I1206 20:00:45.323055  115217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:45.323071  115217 kubeadm.go:322] 
	I1206 20:00:45.323105  115217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:45.323196  115217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:45.323270  115217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:45.323282  115217 kubeadm.go:322] 
	I1206 20:00:45.323373  115217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:45.323477  115217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:45.323575  115217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:45.323590  115217 kubeadm.go:322] 
	I1206 20:00:45.323736  115217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1206 20:00:45.323840  115217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:45.323855  115217 kubeadm.go:322] 
	I1206 20:00:45.323984  115217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324187  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:45.324248  115217 kubeadm.go:322]     --control-plane 	  
	I1206 20:00:45.324266  115217 kubeadm.go:322] 
	I1206 20:00:45.324386  115217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:45.324397  115217 kubeadm.go:322] 
	I1206 20:00:45.324501  115217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324651  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:45.324664  115217 cni.go:84] Creating CNI manager for ""
	I1206 20:00:45.324675  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:45.327284  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:41.039495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:43.041892  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:45.042744  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:44.369437  115497 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.369449  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.369458  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:00:44.369482  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.373360  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373394  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373415  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.373538  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373769  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.373830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.374020  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.374077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.374221  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.374800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.375017  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.528574  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.553349  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:00:44.553382  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:00:44.604100  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.605360  115497 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.605799  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:00:44.610007  115497 node_ready.go:49] node "default-k8s-diff-port-380424" has status "Ready":"True"
	I1206 20:00:44.610039  115497 node_ready.go:38] duration metric: took 4.647914ms waiting for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.610052  115497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:44.622684  115497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:44.639914  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:00:44.640005  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:00:44.710284  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:44.710318  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:00:44.767014  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:46.656182  115497 pod_ready.go:102] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:46.941717  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.413097049s)
	I1206 20:00:46.941764  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.33594011s)
	I1206 20:00:46.941787  115497 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
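Editor's note: the long sed pipeline a few lines above rewrites the coredns ConfigMap in place. Read from that command, the injected Corefile fragment amounts to a "log" directive ahead of "errors" and a hosts block ahead of the forward plugin, mapping host.minikube.internal to the host-side IP reported here (192.168.72.1); the surrounding Corefile layout is inferred, not printed in the log:

    # fragment the sed pipeline injects into the CoreDNS Corefile (layout inferred)
    #     log
    #     errors
    #     hosts {
    #        192.168.72.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf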
	I1206 20:00:46.941793  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941733  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337595925s)
	I1206 20:00:46.941808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.941841  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941863  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.942167  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.942187  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.942198  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.942207  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.943997  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944031  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944041  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944052  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944060  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944077  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.944088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.944363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944401  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944419  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.984172  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.984206  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.984675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.984714  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.984733  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.345448  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.5783821s)
	I1206 20:00:47.345552  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.345573  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.345987  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.346033  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346046  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346056  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.346088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.346359  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346380  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346392  115497 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-380424"
	I1206 20:00:47.346442  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.348281  115497 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1206 20:00:45.328763  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:45.342986  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:45.373351  115217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:45.373503  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.373559  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=old-k8s-version-448851 minikube.k8s.io/updated_at=2023_12_06T20_00_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.701779  115217 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:45.701907  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.815705  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.445065  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.945361  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.444737  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.945540  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.228883  115591 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004688 seconds
	I1206 20:00:49.229058  115591 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:49.258512  115591 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:49.793797  115591 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:49.794010  115591 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-209025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:50.315415  115591 kubeadm.go:322] [bootstrap-token] Using token: j4xv0f.htia0y0wrnbqnji6
	I1206 20:00:47.349693  115497 addons.go:502] enable addons completed in 3.064343142s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1206 20:00:48.648085  115497 pod_ready.go:92] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.648116  115497 pod_ready.go:81] duration metric: took 4.025396521s waiting for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.648132  115497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660202  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.660235  115497 pod_ready.go:81] duration metric: took 12.09317ms waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660248  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666568  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.666666  115497 pod_ready.go:81] duration metric: took 6.407781ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666694  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679566  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.679653  115497 pod_ready.go:81] duration metric: took 12.938485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679675  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554241  115497 pod_ready.go:92] pod "kube-proxy-khh5n" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.554266  115497 pod_ready.go:81] duration metric: took 874.584613ms waiting for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554275  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845110  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.845140  115497 pod_ready.go:81] duration metric: took 290.857787ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845152  115497 pod_ready.go:38] duration metric: took 5.235087469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:49.845172  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:00:49.845251  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:00:49.861908  115497 api_server.go:72] duration metric: took 5.522870891s to wait for apiserver process to appear ...
	I1206 20:00:49.861943  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:00:49.861965  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 20:00:49.868675  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 20:00:49.870214  115497 api_server.go:141] control plane version: v1.28.4
	I1206 20:00:49.870254  115497 api_server.go:131] duration metric: took 8.303187ms to wait for apiserver health ...
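The healthz probe logged above can be reproduced by hand. A minimal sketch, assuming the machine running it can reach 192.168.72.22:8444 and that TLS verification is skipped (the apiserver presents a cluster-internal certificate, so plain curl needs -k or the cluster CA):

	# Probe the same health endpoint the log checks; the body should be "ok" with HTTP 200.
	curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.72.22:8444/healthz
	curl -sk https://192.168.72.22:8444/healthz   # prints: ok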
	I1206 20:00:49.870266  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:00:50.047974  115497 system_pods.go:59] 8 kube-system pods found
	I1206 20:00:50.048004  115497 system_pods.go:61] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.048011  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.048018  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.048025  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.048030  115497 system_pods.go:61] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.048036  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.048045  115497 system_pods.go:61] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.048052  115497 system_pods.go:61] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.048063  115497 system_pods.go:74] duration metric: took 177.789423ms to wait for pod list to return data ...
	I1206 20:00:50.048073  115497 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:00:50.246867  115497 default_sa.go:45] found service account: "default"
	I1206 20:00:50.246903  115497 default_sa.go:55] duration metric: took 198.823117ms for default service account to be created ...
	I1206 20:00:50.246914  115497 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:00:50.447688  115497 system_pods.go:86] 8 kube-system pods found
	I1206 20:00:50.447777  115497 system_pods.go:89] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.447798  115497 system_pods.go:89] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.447815  115497 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.447846  115497 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.447870  115497 system_pods.go:89] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.447886  115497 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.447904  115497 system_pods.go:89] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.447920  115497 system_pods.go:89] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.447953  115497 system_pods.go:126] duration metric: took 201.030369ms to wait for k8s-apps to be running ...
	I1206 20:00:50.447978  115497 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:00:50.448057  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:50.468801  115497 system_svc.go:56] duration metric: took 20.810606ms WaitForService to wait for kubelet.
	I1206 20:00:50.468837  115497 kubeadm.go:581] duration metric: took 6.129827661s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:00:50.468860  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:00:50.646083  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:00:50.646124  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 20:00:50.646138  115497 node_conditions.go:105] duration metric: took 177.272089ms to run NodePressure ...
	I1206 20:00:50.646153  115497 start.go:228] waiting for startup goroutines ...
	I1206 20:00:50.646164  115497 start.go:233] waiting for cluster config update ...
	I1206 20:00:50.646184  115497 start.go:242] writing updated cluster config ...
	I1206 20:00:50.646551  115497 ssh_runner.go:195] Run: rm -f paused
	I1206 20:00:50.711246  115497 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:00:50.713989  115497 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-380424" cluster and "default" namespace by default
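The CoreDNS rewrite that completed at 20:00:46 injects a hosts block for host.minikube.internal ahead of the forward plugin. A short way to confirm the result, assuming kubectl is pointed at the default-k8s-diff-port-380424 context as the final log line states; the expected fragment mirrors the sed expression in the log rather than being a literal dump of the ConfigMap:

	# Print the Corefile and look for the injected hosts block.
	kubectl --context default-k8s-diff-port-380424 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected fragment (inserted before "forward . /etc/resolv.conf"):
	#     hosts {
	#        192.168.72.1 host.minikube.internal
	#        fallthrough
	#     }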
	I1206 20:00:50.317018  115591 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:50.317155  115591 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:50.325410  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:50.335197  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:50.339351  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:50.343930  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:50.352323  115591 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:50.375514  115591 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:50.703397  115591 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:50.753323  115591 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:50.753351  115591 kubeadm.go:322] 
	I1206 20:00:50.753419  115591 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:50.753430  115591 kubeadm.go:322] 
	I1206 20:00:50.753522  115591 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:50.753539  115591 kubeadm.go:322] 
	I1206 20:00:50.753570  115591 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:50.753642  115591 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:50.753706  115591 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:50.753717  115591 kubeadm.go:322] 
	I1206 20:00:50.753780  115591 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:50.753790  115591 kubeadm.go:322] 
	I1206 20:00:50.753847  115591 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:50.753862  115591 kubeadm.go:322] 
	I1206 20:00:50.753928  115591 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:50.754020  115591 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:50.754109  115591 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:50.754120  115591 kubeadm.go:322] 
	I1206 20:00:50.754221  115591 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:50.754317  115591 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:50.754327  115591 kubeadm.go:322] 
	I1206 20:00:50.754426  115591 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754552  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:50.754583  115591 kubeadm.go:322] 	--control-plane 
	I1206 20:00:50.754593  115591 kubeadm.go:322] 
	I1206 20:00:50.754690  115591 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:50.754707  115591 kubeadm.go:322] 
	I1206 20:00:50.754802  115591 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754931  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:50.755776  115591 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:50.755809  115591 cni.go:84] Creating CNI manager for ""
	I1206 20:00:50.755820  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:50.759045  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:47.539932  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:50.039553  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:48.445172  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:48.944908  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.445418  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.944612  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.445278  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.944545  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.444775  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.945470  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.445365  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.944742  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.760722  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:50.792095  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:50.854264  115591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:50.854443  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.854549  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=embed-certs-209025 minikube.k8s.io/updated_at=2023_12_06T20_00_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.894717  115591 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:51.388829  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.515185  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.132878  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.633171  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.132766  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.632887  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.132824  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.044531  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:54.538924  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:53.444641  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.945468  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.444996  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.944687  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.444757  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.945342  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.445585  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.945489  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.445628  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.944895  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.632961  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.132361  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.632305  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.132439  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.632252  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.132956  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.633210  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.133090  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.632198  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.133167  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.445440  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.945554  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.445298  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.945574  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.179151  115217 kubeadm.go:1088] duration metric: took 14.805687634s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:00.179185  115217 kubeadm.go:406] StartCluster complete in 5m46.007596294s
	I1206 20:01:00.179204  115217 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.179291  115217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:00.181490  115217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.181810  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:00.181933  115217 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:00.182031  115217 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182063  115217 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-448851"
	W1206 20:01:00.182071  115217 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:00.182126  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.182126  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 20:01:00.182180  115217 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182198  115217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448851"
	I1206 20:01:00.182554  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182572  115217 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182581  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182591  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182596  115217 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-448851"
	W1206 20:01:00.182606  115217 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:00.182613  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182735  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.183101  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.183146  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.201450  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I1206 20:01:00.203683  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I1206 20:01:00.203715  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.203800  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1206 20:01:00.204181  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204341  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204386  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204409  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204863  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204877  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204884  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204895  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204950  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205328  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205333  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205489  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.205520  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.205560  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.205992  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.206064  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.209487  115217 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-448851"
	W1206 20:01:00.209512  115217 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:00.209545  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.209987  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.210033  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.227092  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1206 20:01:00.227961  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.228610  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.228633  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.229107  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.229342  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.230638  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I1206 20:01:00.231552  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.231863  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.235076  115217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:00.232196  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.232926  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I1206 20:01:00.237258  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.237284  115217 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.237310  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:00.237333  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.237682  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.238034  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.238212  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.238240  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.238580  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.238612  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.238977  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.239198  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.240631  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.243107  115217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:00.241155  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.241833  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.245218  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:00.245244  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:00.245267  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.245315  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.245333  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.245505  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.245639  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.245737  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.248492  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249278  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.249313  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249597  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.249811  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.249971  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.250090  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.259179  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I1206 20:01:00.259617  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.260068  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.260090  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.260461  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.260685  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.262284  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.262586  115217 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.262604  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:00.262623  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.265183  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265643  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.265661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265890  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.266078  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.266240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.266941  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.271403  115217 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-448851" context rescaled to 1 replicas
	I1206 20:01:00.271435  115217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:00.273197  115217 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:57.039307  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:59.039639  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:00.274454  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:00.597204  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:00.597240  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:00.621632  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.623444  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.630185  115217 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.630280  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:00.633576  115217 node_ready.go:49] node "old-k8s-version-448851" has status "Ready":"True"
	I1206 20:01:00.633603  115217 node_ready.go:38] duration metric: took 3.385927ms waiting for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.633616  115217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:00.717216  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:00.717273  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:00.735998  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:00.866186  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:00.866218  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:01.066040  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:01.835164  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213479825s)
	I1206 20:01:01.835230  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835243  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835558  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835605  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835615  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.835648  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835939  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835974  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835983  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.872799  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.872835  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.873282  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.873317  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.873336  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.258697  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635202106s)
	I1206 20:01:02.258754  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.258769  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.258773  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.628450705s)
	I1206 20:01:02.258806  115217 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:02.259113  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.260973  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261002  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261014  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.261025  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.261416  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261440  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261424  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.375593  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.309500554s)
	I1206 20:01:02.375659  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.375680  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376064  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376155  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376168  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376185  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.376193  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376522  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376532  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376543  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376559  115217 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-448851"
	I1206 20:01:02.378457  115217 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:02.380099  115217 addons.go:502] enable addons completed in 2.198162438s: enabled=[default-storageclass storage-provisioner metrics-server]
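"enable addons completed" only verifies that the metrics-server manifests applied cleanly; the metrics-server pods in the listings above are still Pending at this point. A hedged manual check of the addon's progress, assuming the deployment name matches the metrics-server-57f55c9bc5-* pods seen in the log and the standard APIService name (v1beta1.metrics.k8s.io) registered by metrics-apiservice.yaml:

	# Wait for the Deployment applied from /etc/kubernetes/addons to roll out.
	kubectl -n kube-system rollout status deployment/metrics-server --timeout=5m
	# Check that the aggregated metrics API has become Available.
	kubectl get apiservice v1beta1.metrics.k8s.io
	# Once available, node metrics should be served.
	kubectl top nodes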
	I1206 20:00:59.632971  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.133124  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.633148  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.132260  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.632323  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.132575  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.632268  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.132789  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.633155  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.132754  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.321130  115591 kubeadm.go:1088] duration metric: took 13.466729355s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:04.321175  115591 kubeadm.go:406] StartCluster complete in 5m10.1110739s
	I1206 20:01:04.321200  115591 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.321311  115591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:04.324158  115591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.324502  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:04.324531  115591 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:04.324609  115591 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-209025"
	I1206 20:01:04.324633  115591 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-209025"
	W1206 20:01:04.324640  115591 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:04.324670  115591 addons.go:69] Setting default-storageclass=true in profile "embed-certs-209025"
	I1206 20:01:04.324699  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.324702  115591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-209025"
	I1206 20:01:04.324729  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:01:04.324799  115591 addons.go:69] Setting metrics-server=true in profile "embed-certs-209025"
	I1206 20:01:04.324813  115591 addons.go:231] Setting addon metrics-server=true in "embed-certs-209025"
	W1206 20:01:04.324820  115591 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:04.324858  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.325100  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325126  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325127  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325163  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325191  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325213  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.344127  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I1206 20:01:04.344361  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I1206 20:01:04.344866  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.344978  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.345615  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345635  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.345756  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345766  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.346201  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.346772  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.346821  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.347367  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.347741  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.348264  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
	I1206 20:01:04.348754  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.349655  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.349676  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.350156  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.352233  115591 addons.go:231] Setting addon default-storageclass=true in "embed-certs-209025"
	W1206 20:01:04.352257  115591 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:04.352286  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.352700  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.352734  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.353530  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.353563  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.365607  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I1206 20:01:04.366094  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.366493  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.366514  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.366780  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.366908  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.368611  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.370655  115591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:04.372351  115591 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.372372  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1206 20:01:04.372376  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:04.372402  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.373021  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I1206 20:01:04.374446  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.375104  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.375126  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.375570  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.375769  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.376448  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.376851  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.376907  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.377123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.377377  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.377531  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.379514  115591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:04.377862  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.378152  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.381562  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.381682  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:04.381700  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:04.381722  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.382619  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.382788  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.383576  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.384146  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.384176  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.386297  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.386684  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.386734  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.387477  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.387726  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.387913  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.388055  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.401629  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I1206 20:01:04.402214  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.402804  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.402826  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.403127  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.403337  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.405059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.405404  115591 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.405427  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:04.405449  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.408608  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409145  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.409176  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409443  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.409640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.409860  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.410016  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	W1206 20:01:04.462788  115591 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "embed-certs-209025" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1206 20:01:04.462843  115591 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1206 20:01:04.462872  115591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:04.464916  115591 out.go:177] * Verifying Kubernetes components...
	I1206 20:01:04.466388  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
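
The component verification that starts here rests on a single probe: systemctl is-active --quiet exits 0 only while the unit is active, so the caller can branch on the exit code without parsing any output. A minimal standalone sketch of the same check, assuming the SSH identity and address recorded earlier in this log:

    ssh -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa \
        docker@192.168.50.164 \
        'sudo systemctl is-active --quiet kubelet' \
      && echo "kubelet active" || echo "kubelet not active"
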
	I1206 20:01:01.039870  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:03.550944  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.231905  115078 pod_ready.go:81] duration metric: took 4m0.001038985s waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:05.231950  115078 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:01:05.231962  115078 pod_ready.go:38] duration metric: took 4m4.801417566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:05.231988  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:05.232081  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:05.232155  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:05.294538  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:05.294570  115078 cri.go:89] found id: ""
	I1206 20:01:05.294581  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:05.294643  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.300221  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:05.300300  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:05.359655  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:05.359685  115078 cri.go:89] found id: ""
	I1206 20:01:05.359696  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:05.359759  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.364518  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:05.364600  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:05.408448  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:05.408490  115078 cri.go:89] found id: ""
	I1206 20:01:05.408510  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:05.408575  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.413345  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:05.413428  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:05.462932  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.462960  115078 cri.go:89] found id: ""
	I1206 20:01:05.462971  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:05.463034  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.468632  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:05.468713  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:05.519690  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:05.519720  115078 cri.go:89] found id: ""
	I1206 20:01:05.519731  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:05.519789  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.525847  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:05.525933  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:05.580475  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:05.580537  115078 cri.go:89] found id: ""
	I1206 20:01:05.580550  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:05.580623  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.585602  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:05.585688  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:05.636350  115078 cri.go:89] found id: ""
	I1206 20:01:05.636383  115078 logs.go:284] 0 containers: []
	W1206 20:01:05.636394  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:05.636403  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:05.636469  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:05.678819  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:05.678846  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:05.678853  115078 cri.go:89] found id: ""
	I1206 20:01:05.678863  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:05.678929  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.683845  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.689989  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:05.690021  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.745510  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:05.745554  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
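
Every "Gathering logs for ..." step above follows the same pattern: resolve the container ID with crictl ps, tail its log with crictl logs, and fall back to journalctl for the CRI-O and kubelet units plus dmesg for kernel warnings. A sketch that reproduces the whole sweep on the node (the component names and individual commands are the ones shown in this log; the loop itself is an assumption, not minikube code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      id=$(sudo crictl ps -a --quiet --name="$name" | head -n 1)
      # skip components with no container, e.g. kindnet above
      [ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"
    done
    sudo journalctl -u crio -n 400                                            # CRI-O runtime log
    sudo journalctl -u kubelet -n 400                                         # kubelet log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
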
	I1206 20:01:04.580869  115591 node_ready.go:35] waiting up to 6m0s for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.580933  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:04.585219  115591 node_ready.go:49] node "embed-certs-209025" has status "Ready":"True"
	I1206 20:01:04.585267  115591 node_ready.go:38] duration metric: took 4.363508ms waiting for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.585281  115591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:04.595166  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:04.611829  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.622127  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.628233  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:04.628260  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:04.706473  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:04.706498  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:04.790827  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:04.790868  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:04.840367  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:06.312054  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.73108071s)
	I1206 20:01:06.312092  115591 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:06.312099  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.700233834s)
	I1206 20:01:06.312147  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312503  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312519  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312531  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312541  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312895  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312985  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:06.334314  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.334343  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.334719  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.334742  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.677046  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.176051  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.553877678s)
	I1206 20:01:07.176112  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176124  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176520  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176551  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.176570  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176584  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176859  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.176852  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176884  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.287377  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.446934189s)
	I1206 20:01:07.287525  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.287586  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288055  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.288055  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288082  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288096  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.288105  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288358  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288372  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288384  115591 addons.go:467] Verifying addon metrics-server=true in "embed-certs-209025"
	I1206 20:01:07.291120  115591 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:03.100131  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.107571  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.599078  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.292151  115591 addons.go:502] enable addons completed in 2.967619291s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:01:09.122709  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
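
The long sed pipeline that completes at 20:01:06 above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.50.1 for this profile). A hedged way to confirm the record landed, reusing the kubectl invocation shown in the log (only the grep is added here):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # expected fragment, per the sed expression above:
    #        hosts {
    #           192.168.50.1 host.minikube.internal
    #           fallthrough
    #        }
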
	I1206 20:01:06.258156  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:06.258193  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:06.321049  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:06.321084  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:06.376243  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:06.376281  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:06.441701  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:06.441742  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:06.493399  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:06.493440  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:06.545681  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:06.545717  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:06.602830  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:06.602864  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:06.618874  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:06.618903  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:06.694329  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:06.694375  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:06.748217  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:06.748255  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:06.933616  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:06.933655  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.511340  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.530228  115078 api_server.go:72] duration metric: took 4m16.464196787s to wait for apiserver process to appear ...
	I1206 20:01:09.530254  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.530295  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:09.530357  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:09.574265  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.574301  115078 cri.go:89] found id: ""
	I1206 20:01:09.574313  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:09.574377  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.579240  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:09.579310  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:09.622512  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.622540  115078 cri.go:89] found id: ""
	I1206 20:01:09.622551  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:09.622619  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.627770  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:09.627847  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:09.675976  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:09.676007  115078 cri.go:89] found id: ""
	I1206 20:01:09.676018  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:09.676082  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.680750  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:09.680824  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:09.721081  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.721108  115078 cri.go:89] found id: ""
	I1206 20:01:09.721119  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:09.721181  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.725501  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:09.725568  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:09.777674  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:09.777700  115078 cri.go:89] found id: ""
	I1206 20:01:09.777709  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:09.777767  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.782475  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:09.782558  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:09.833889  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:09.833916  115078 cri.go:89] found id: ""
	I1206 20:01:09.833926  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:09.833985  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.838897  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:09.838977  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:09.880892  115078 cri.go:89] found id: ""
	I1206 20:01:09.880923  115078 logs.go:284] 0 containers: []
	W1206 20:01:09.880934  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:09.880943  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:09.881011  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:09.924025  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:09.924058  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:09.924065  115078 cri.go:89] found id: ""
	I1206 20:01:09.924075  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:09.924142  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.928667  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.933112  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:09.933134  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:09.949212  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:09.949254  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.996227  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:09.996261  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:10.046607  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:10.046645  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:10.102171  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:10.102214  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:10.160600  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:10.160641  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:10.203673  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:10.203709  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:10.681783  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:10.681824  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:10.813061  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:10.813102  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:10.857895  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:10.857930  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:10.904589  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:10.904625  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:10.957570  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:10.957608  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
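
The apiserver wait in this block is two-staged: a process probe with pgrep, then the healthz poll that follows. The pgrep flags carry the logic: -f matches against the full command line, -x requires the pattern to match that line in its entirety, and -n keeps only the newest match. A standalone sketch of the probe, assuming it is run on the node itself:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null \
      && echo "kube-apiserver process found" || echo "kube-apiserver process missing"
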
	I1206 20:01:09.624997  115591 pod_ready.go:92] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.625025  115591 pod_ready.go:81] duration metric: took 5.029829059s waiting for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.625038  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632534  115591 pod_ready.go:92] pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.632561  115591 pod_ready.go:81] duration metric: took 7.514952ms waiting for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632574  115591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642077  115591 pod_ready.go:92] pod "etcd-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.642107  115591 pod_ready.go:81] duration metric: took 9.52505ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642121  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648636  115591 pod_ready.go:92] pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.648658  115591 pod_ready.go:81] duration metric: took 6.530394ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648667  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656534  115591 pod_ready.go:92] pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.656561  115591 pod_ready.go:81] duration metric: took 7.887248ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656573  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019281  115591 pod_ready.go:92] pod "kube-proxy-nf2cw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.019310  115591 pod_ready.go:81] duration metric: took 362.727602ms waiting for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019323  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419938  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.419971  115591 pod_ready.go:81] duration metric: took 400.640145ms waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419982  115591 pod_ready.go:38] duration metric: took 5.834689614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:10.420000  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:10.420062  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:10.436691  115591 api_server.go:72] duration metric: took 5.973781556s to wait for apiserver process to appear ...
	I1206 20:01:10.436723  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:10.436746  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 20:01:10.442876  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 20:01:10.444774  115591 api_server.go:141] control plane version: v1.28.4
	I1206 20:01:10.444798  115591 api_server.go:131] duration metric: took 8.067787ms to wait for apiserver health ...
	I1206 20:01:10.444808  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:10.624219  115591 system_pods.go:59] 9 kube-system pods found
	I1206 20:01:10.624251  115591 system_pods.go:61] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:10.624256  115591 system_pods.go:61] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:10.624260  115591 system_pods.go:61] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:10.624264  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:10.624268  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:10.624272  115591 system_pods.go:61] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:10.624275  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:10.624282  115591 system_pods.go:61] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.624286  115591 system_pods.go:61] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:10.624296  115591 system_pods.go:74] duration metric: took 179.481721ms to wait for pod list to return data ...
	I1206 20:01:10.624306  115591 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:10.818715  115591 default_sa.go:45] found service account: "default"
	I1206 20:01:10.818741  115591 default_sa.go:55] duration metric: took 194.428895ms for default service account to be created ...
	I1206 20:01:10.818750  115591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:11.022686  115591 system_pods.go:86] 9 kube-system pods found
	I1206 20:01:11.022713  115591 system_pods.go:89] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:11.022718  115591 system_pods.go:89] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:11.022722  115591 system_pods.go:89] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:11.022726  115591 system_pods.go:89] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:11.022730  115591 system_pods.go:89] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:11.022734  115591 system_pods.go:89] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:11.022738  115591 system_pods.go:89] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:11.022744  115591 system_pods.go:89] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.022750  115591 system_pods.go:89] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:11.022762  115591 system_pods.go:126] duration metric: took 204.004835ms to wait for k8s-apps to be running ...
	I1206 20:01:11.022774  115591 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:11.022824  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:11.041212  115591 system_svc.go:56] duration metric: took 18.424469ms WaitForService to wait for kubelet.
	I1206 20:01:11.041256  115591 kubeadm.go:581] duration metric: took 6.578354937s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:11.041291  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:11.219045  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:11.219079  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:11.219094  115591 node_conditions.go:105] duration metric: took 177.793737ms to run NodePressure ...
	I1206 20:01:11.219107  115591 start.go:228] waiting for startup goroutines ...
	I1206 20:01:11.219113  115591 start.go:233] waiting for cluster config update ...
	I1206 20:01:11.219125  115591 start.go:242] writing updated cluster config ...
	I1206 20:01:11.219482  115591 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:11.275863  115591 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:01:11.278074  115591 out.go:177] * Done! kubectl is now configured to use "embed-certs-209025" cluster and "default" namespace by default
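
Once the process is up, readiness is decided by the apiserver's /healthz endpoint, which the log shows returning HTTP 200 with the body "ok". A hedged manual equivalent (anonymous access to /healthz is commonly permitted, but credentials may be required depending on the cluster's anonymous-auth and RBAC settings):

    curl -k https://192.168.50.164:8443/healthz
    # expect: ok    (HTTP 200, matching the response logged at 20:01:10.442)
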
	I1206 20:01:09.099590  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.099616  115217 pod_ready.go:81] duration metric: took 8.363590309s waiting for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.099626  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.103452  115217 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103485  115217 pod_ready.go:81] duration metric: took 3.845902ms waiting for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:09.103499  115217 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103507  115217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110700  115217 pod_ready.go:92] pod "kube-proxy-wvqmw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.110721  115217 pod_ready.go:81] duration metric: took 7.207091ms waiting for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110729  115217 pod_ready.go:38] duration metric: took 8.477100108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:09.110744  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:09.110791  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.127244  115217 api_server.go:72] duration metric: took 8.855777965s to wait for apiserver process to appear ...
	I1206 20:01:09.127272  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.127290  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 20:01:09.134411  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 20:01:09.135553  115217 api_server.go:141] control plane version: v1.16.0
	I1206 20:01:09.135578  115217 api_server.go:131] duration metric: took 8.298936ms to wait for apiserver health ...
	I1206 20:01:09.135589  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:09.140145  115217 system_pods.go:59] 4 kube-system pods found
	I1206 20:01:09.140167  115217 system_pods.go:61] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.140172  115217 system_pods.go:61] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.140178  115217 system_pods.go:61] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.140183  115217 system_pods.go:61] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.140191  115217 system_pods.go:74] duration metric: took 4.595695ms to wait for pod list to return data ...
	I1206 20:01:09.140198  115217 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:09.142852  115217 default_sa.go:45] found service account: "default"
	I1206 20:01:09.142877  115217 default_sa.go:55] duration metric: took 2.67139ms for default service account to be created ...
	I1206 20:01:09.142888  115217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:09.145800  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.145822  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.145827  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.145833  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.145838  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.145856  115217 retry.go:31] will retry after 199.361191ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.351430  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.351475  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.351485  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.351497  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.351504  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.351529  115217 retry.go:31] will retry after 239.084983ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.595441  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.595479  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.595487  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.595498  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.595506  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.595528  115217 retry.go:31] will retry after 380.909676ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.982061  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.982088  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.982093  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.982101  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.982115  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.982133  115217 retry.go:31] will retry after 451.472574ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:10.439270  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:10.439303  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:10.439311  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:10.439321  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.439328  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:10.439350  115217 retry.go:31] will retry after 654.845182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.101088  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.101129  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.101137  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.101147  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.101155  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.101178  115217 retry.go:31] will retry after 650.939663ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.757024  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.757053  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.757058  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.757065  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.757070  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.757088  115217 retry.go:31] will retry after 828.555469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:12.591156  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:12.591193  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:12.591209  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:12.591220  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:12.591227  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:12.591254  115217 retry.go:31] will retry after 1.26518336s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
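
Each retry above fails the same way: only four kube-system pods are visible, and the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) never appear, so the loop schedules another backoff. A hedged spot-check on the node, assuming this v1.16.0 profile uses the same binary layout as the other profiles in this log:

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide
    # static control-plane pods are created by the kubelet from its manifest
    # directory, so an empty or missing directory here would explain the gap:
    ls /etc/kubernetes/manifests/
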
	I1206 20:01:11.000472  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:11.000505  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.545345  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 20:01:13.551262  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 20:01:13.553129  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 20:01:13.553161  115078 api_server.go:131] duration metric: took 4.022898619s to wait for apiserver health ...
	I1206 20:01:13.553173  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:13.553204  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:13.553287  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:13.619861  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:13.619892  115078 cri.go:89] found id: ""
	I1206 20:01:13.619903  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:13.619994  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.625028  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:13.625099  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:13.667275  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:13.667300  115078 cri.go:89] found id: ""
	I1206 20:01:13.667309  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:13.667378  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.671673  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:13.671740  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:13.713319  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.713351  115078 cri.go:89] found id: ""
	I1206 20:01:13.713361  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:13.713428  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.718155  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:13.718219  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:13.758383  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.758414  115078 cri.go:89] found id: ""
	I1206 20:01:13.758424  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:13.758488  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.762747  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:13.762826  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:13.803602  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:13.803627  115078 cri.go:89] found id: ""
	I1206 20:01:13.803635  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:13.803685  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.808083  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:13.808160  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:13.852504  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:13.852531  115078 cri.go:89] found id: ""
	I1206 20:01:13.852539  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:13.852598  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.857213  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:13.857322  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:13.896981  115078 cri.go:89] found id: ""
	I1206 20:01:13.897023  115078 logs.go:284] 0 containers: []
	W1206 20:01:13.897035  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:13.897044  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:13.897110  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:13.940969  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:13.940996  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:13.941004  115078 cri.go:89] found id: ""
	I1206 20:01:13.941013  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:13.941075  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.945508  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.949933  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:13.949961  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.986034  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:13.986065  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:14.045155  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:14.045197  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:14.091205  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:14.091240  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:14.130184  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:14.130221  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:14.176981  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:14.177024  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:14.191755  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:14.191796  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:14.316375  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:14.316413  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:14.359700  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:14.359746  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:14.415906  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:14.415952  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:14.471453  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:14.471496  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:14.520012  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:14.520051  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:14.567445  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:14.567482  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:17.434636  115078 system_pods.go:59] 8 kube-system pods found
	I1206 20:01:17.434671  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.434676  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.434680  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.434685  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.434688  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.434692  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.434700  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.434706  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.434714  115078 system_pods.go:74] duration metric: took 3.881535405s to wait for pod list to return data ...
	I1206 20:01:17.434724  115078 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:17.437744  115078 default_sa.go:45] found service account: "default"
	I1206 20:01:17.437770  115078 default_sa.go:55] duration metric: took 3.038532ms for default service account to be created ...
	I1206 20:01:17.437780  115078 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:17.444539  115078 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:17.444567  115078 system_pods.go:89] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.444572  115078 system_pods.go:89] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.444577  115078 system_pods.go:89] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.444583  115078 system_pods.go:89] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.444587  115078 system_pods.go:89] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.444592  115078 system_pods.go:89] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.444602  115078 system_pods.go:89] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.444608  115078 system_pods.go:89] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.444619  115078 system_pods.go:126] duration metric: took 6.832576ms to wait for k8s-apps to be running ...
	I1206 20:01:17.444629  115078 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:17.444687  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:17.464821  115078 system_svc.go:56] duration metric: took 20.181153ms WaitForService to wait for kubelet.
	I1206 20:01:17.464866  115078 kubeadm.go:581] duration metric: took 4m24.398841426s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:17.464894  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:17.467938  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:17.467964  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:17.467975  115078 node_conditions.go:105] duration metric: took 3.076458ms to run NodePressure ...
	I1206 20:01:17.467988  115078 start.go:228] waiting for startup goroutines ...
	I1206 20:01:17.467994  115078 start.go:233] waiting for cluster config update ...
	I1206 20:01:17.468004  115078 start.go:242] writing updated cluster config ...
	I1206 20:01:17.468290  115078 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:17.523451  115078 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1206 20:01:17.525609  115078 out.go:177] * Done! kubectl is now configured to use "no-preload-989559" cluster and "default" namespace by default
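	[editor's note] The no-preload run above finishes by probing the apiserver's /healthz endpoint (expecting an HTTP 200 with body "ok") and then confirming the kubelet systemd unit is active before declaring the cluster ready. The following is a minimal, illustrative Go sketch of those two checks, not minikube's actual implementation: the helper names (apiserverHealthy, kubeletActive), the hard-coded endpoint, and the use of InsecureSkipVerify are assumptions made only to keep the example self-contained.

	```go
	// Illustrative sketch (not minikube source): probe an apiserver /healthz
	// endpoint and check that the kubelet systemd unit is active, roughly
	// mirroring the checks reported in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"time"
	)

	// apiserverHealthy returns nil if GET <url> answers 200 (body is normally "ok").
	func apiserverHealthy(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: the apiserver serves a self-signed cert here, so skip
			// verification for the illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	// kubeletActive runs the same kind of check as `systemctl is-active --quiet kubelet`;
	// a non-zero exit status (a non-nil error) means the unit is not active.
	func kubeletActive() error {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	}

	func main() {
		if err := apiserverHealthy("https://192.168.39.5:8443/healthz"); err != nil {
			fmt.Println("apiserver not healthy:", err)
			return
		}
		if err := kubeletActive(); err != nil {
			fmt.Println("kubelet unit not active:", err)
			return
		}
		fmt.Println("control plane endpoint and kubelet look healthy")
	}
	```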
	I1206 20:01:13.862479  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:13.862506  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:13.862512  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:13.862519  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:13.862523  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:13.862542  115217 retry.go:31] will retry after 1.299046526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:15.166601  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:15.166630  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:15.166635  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:15.166642  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:15.166647  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:15.166667  115217 retry.go:31] will retry after 1.832151574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:17.005707  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:17.005739  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:17.005746  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:17.005754  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.005774  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:17.005797  115217 retry.go:31] will retry after 1.796371959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:18.808729  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:18.808757  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:18.808763  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:18.808770  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:18.808775  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:18.808792  115217 retry.go:31] will retry after 2.814845209s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:21.630762  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:21.630791  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:21.630796  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:21.630811  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:21.630816  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:21.630834  115217 retry.go:31] will retry after 2.866148194s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:24.502168  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:24.502198  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:24.502203  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:24.502211  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:24.502215  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:24.502233  115217 retry.go:31] will retry after 3.777894628s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:28.284776  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:28.284812  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:28.284818  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:28.284825  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:28.284829  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:28.284847  115217 retry.go:31] will retry after 4.837538668s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:33.127301  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:33.127330  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:33.127336  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:33.127344  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:33.127349  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:33.127370  115217 retry.go:31] will retry after 6.833662344s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:39.966417  115217 system_pods.go:86] 5 kube-system pods found
	I1206 20:01:39.966450  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:39.966458  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Pending
	I1206 20:01:39.966465  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:39.966476  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:39.966483  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:39.966504  115217 retry.go:31] will retry after 9.204033337s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:49.176395  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:49.176434  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:49.176442  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Pending
	I1206 20:01:49.176450  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:49.176457  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:49.176462  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:49.176469  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Pending
	I1206 20:01:49.176479  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:49.176487  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:49.176511  115217 retry.go:31] will retry after 9.456016194s: missing components: etcd, kube-scheduler
	I1206 20:01:58.638807  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:58.638837  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:58.638842  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Running
	I1206 20:01:58.638847  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:58.638851  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:58.638855  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:58.638861  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Running
	I1206 20:01:58.638867  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:58.638872  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:58.638879  115217 system_pods.go:126] duration metric: took 49.495986809s to wait for k8s-apps to be running ...
	I1206 20:01:58.638886  115217 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:58.638935  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:58.654683  115217 system_svc.go:56] duration metric: took 15.783018ms WaitForService to wait for kubelet.
	I1206 20:01:58.654715  115217 kubeadm.go:581] duration metric: took 58.383258338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:58.654738  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:58.659189  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:58.659215  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:58.659226  115217 node_conditions.go:105] duration metric: took 4.482979ms to run NodePressure ...
	I1206 20:01:58.659239  115217 start.go:228] waiting for startup goroutines ...
	I1206 20:01:58.659245  115217 start.go:233] waiting for cluster config update ...
	I1206 20:01:58.659255  115217 start.go:242] writing updated cluster config ...
	I1206 20:01:58.659522  115217 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:58.710716  115217 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1206 20:01:58.713372  115217 out.go:177] 
	W1206 20:01:58.714711  115217 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1206 20:01:58.716208  115217 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1206 20:01:58.717734  115217 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-448851" cluster and "default" namespace by default
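	[editor's note] The "will retry after ...: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler" lines above show minikube polling the kube-system namespace with a growing delay until every control-plane pod reports Running. Below is a hedged Go sketch of that poll-with-backoff pattern, not the retry.go implementation: the missingComponents helper, the kubectl-based check, the 1.5x backoff factor, and the 6-minute deadline are assumptions for a self-contained example.

	```go
	// Illustrative sketch: poll kube-system until the core control-plane
	// components are Running, backing off between attempts the way the
	// "will retry after ..." log lines above do.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	var required = []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}

	// missingComponents returns the required components with no Running pod yet.
	func missingComponents() ([]string, error) {
		out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
			"-o", "custom-columns=NAME:.metadata.name,PHASE:.status.phase", "--no-headers").Output()
		if err != nil {
			return nil, err
		}
		var missing []string
		for _, comp := range required {
			found := false
			for _, line := range strings.Split(string(out), "\n") {
				fields := strings.Fields(line)
				if len(fields) == 2 && strings.HasPrefix(fields[0], comp) && fields[1] == "Running" {
					found = true
					break
				}
			}
			if !found {
				missing = append(missing, comp)
			}
		}
		return missing, nil
	}

	func main() {
		backoff := 500 * time.Millisecond
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			missing, err := missingComponents()
			if err == nil && len(missing) == 0 {
				fmt.Println("all control-plane components are running")
				return
			}
			fmt.Printf("will retry after %v: missing components: %s\n", backoff, strings.Join(missing, ", "))
			time.Sleep(backoff)
			if backoff < 10*time.Second {
				backoff = backoff * 3 / 2 // grow the wait roughly as the log shows
			}
		}
		fmt.Println("timed out waiting for control-plane components")
	}
	```

	The journal excerpt that follows is reproduced verbatim from the CRI-O unit on old-k8s-version-448851; the repeated ListContainers/ImageFsInfo responses are the kubelet's routine polling of the runtime, not an error condition.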
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:54:55 UTC, ends at Wed 2023-12-06 20:11:00 UTC. --
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.404329660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893460404314938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=6a9a4af5-4119-4c94-9a49-19c5f8ce8a8d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.404777124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=55948fc2-321c-4e4a-9eb3-dc3d7eff5413 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.404918533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=55948fc2-321c-4e4a-9eb3-dc3d7eff5413 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.405092863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=55948fc2-321c-4e4a-9eb3-dc3d7eff5413 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.471502957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5b54768c-5cb6-45de-a38b-505255f5a2db name=/runtime.v1.RuntimeService/Version
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.471581240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5b54768c-5cb6-45de-a38b-505255f5a2db name=/runtime.v1.RuntimeService/Version
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.473006996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ee9ff3fe-fdb7-4e75-98d1-64e3b50107a8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.473463626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893460473445441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=ee9ff3fe-fdb7-4e75-98d1-64e3b50107a8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.474064438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=99d9ef7c-58d3-4b5b-8aa1-cbdd37507fd2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.474145838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=99d9ef7c-58d3-4b5b-8aa1-cbdd37507fd2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.474326162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=99d9ef7c-58d3-4b5b-8aa1-cbdd37507fd2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.519165077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e79187e0-c737-44ba-a334-7a9f0f5df21c name=/runtime.v1.RuntimeService/Version
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.519246374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e79187e0-c737-44ba-a334-7a9f0f5df21c name=/runtime.v1.RuntimeService/Version
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.520425612Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=619a6b9b-2d7c-4651-b9ce-1fc66d60c349 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.520916536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893460520902924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=619a6b9b-2d7c-4651-b9ce-1fc66d60c349 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.521451884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4c64e036-c8b0-4447-b822-bcc3a3eb6be4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.521514042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4c64e036-c8b0-4447-b822-bcc3a3eb6be4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.521730275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4c64e036-c8b0-4447-b822-bcc3a3eb6be4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.557620370Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9c8c0df4-d948-4dec-8a37-0b1b06d33521 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.557680103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9c8c0df4-d948-4dec-8a37-0b1b06d33521 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.559329771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=02f9d00d-3403-464d-bea4-df052467ca2c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.559686368Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893460559675618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=02f9d00d-3403-464d-bea4-df052467ca2c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.560288859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f8a9b6d-79b5-46a4-9831-8c312df2633c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.560334086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f8a9b6d-79b5-46a4-9831-8c312df2633c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:11:00 old-k8s-version-448851 crio[712]: time="2023-12-06 20:11:00.560516682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f8a9b6d-79b5-46a4-9831-8c312df2633c name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0268a45cb6867       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   30ccdc4107ffb       storage-provisioner
	0de730d3d80f9       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   9 minutes ago       Running             kube-proxy                0                   90bef1ca16b73       kube-proxy-wvqmw
	ff3e0be26327f       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   9 minutes ago       Running             coredns                   0                   4ecead5f95435       coredns-5644d7b6d9-2nncf
	0c383b9ccb2a1       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   8ef2aca284178       etcd-old-k8s-version-448851
	19e2a17fb2cb9       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   2cc7c0d14124e       kube-scheduler-old-k8s-version-448851
	06212ee2a32f7       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   a88c3a3d24e68       kube-controller-manager-old-k8s-version-448851
	4a03d08bf855b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   070958b682423       kube-apiserver-old-k8s-version-448851
	46fe8c39d7ac6       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   070958b682423       kube-apiserver-old-k8s-version-448851
	
	* 
	* ==> coredns [ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f] <==
	* .:53
	2023-12-06T20:01:01.619Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-12-06T20:01:01.648Z [INFO] CoreDNS-1.6.2
	2023-12-06T20:01:01.648Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-12-06T20:01:39.076Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	2023-12-06T20:01:39.105Z [INFO] 127.0.0.1:50903 - 46455 "HINFO IN 7909166905492929656.2414890882460254701. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029157112s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-448851
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-448851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=old-k8s-version-448851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T20_00_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 20:00:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:10:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:10:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:10:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:10:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.33
	  Hostname:    old-k8s-version-448851
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 aa71c7e30b1142b693698088426cb1d6
	 System UUID:                aa71c7e3-0b11-42b6-9369-8088426cb1d6
	 Boot ID:                    329ce5de-4216-4673-8fb1-de5942212a26
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-2nncf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-448851                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                kube-apiserver-old-k8s-version-448851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                kube-controller-manager-old-k8s-version-448851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-proxy-wvqmw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-448851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                metrics-server-74d5856cc6-tgtlm                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m57s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-448851     Node old-k8s-version-448851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-448851     Node old-k8s-version-448851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-448851     Node old-k8s-version-448851 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m57s              kube-proxy, old-k8s-version-448851  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067492] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.360665] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.465418] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149930] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.509408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 6 19:55] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.103328] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.142685] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.114670] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.230580] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +19.942163] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +0.598795] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.635722] kauditd_printk_skb: 13 callbacks suppressed
	[Dec 6 19:56] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 6 20:00] systemd-fstab-generator[3089]: Ignoring "noauto" for root device
	[  +1.460595] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 20:01] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2] <==
	* 2023-12-06 20:00:36.185641 I | raft: newRaft 8213be6a1edaaef2 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-06 20:00:36.185665 I | raft: 8213be6a1edaaef2 became follower at term 1
	2023-12-06 20:00:36.195363 W | auth: simple token is not cryptographically signed
	2023-12-06 20:00:36.201092 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-06 20:00:36.202358 I | etcdserver: 8213be6a1edaaef2 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-06 20:00:36.202918 I | etcdserver/membership: added member 8213be6a1edaaef2 [https://192.168.61.33:2380] to cluster 57e911bf31e05932
	2023-12-06 20:00:36.204434 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-06 20:00:36.204700 I | embed: listening for metrics on http://192.168.61.33:2381
	2023-12-06 20:00:36.204788 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-06 20:00:36.286287 I | raft: 8213be6a1edaaef2 is starting a new election at term 1
	2023-12-06 20:00:36.286375 I | raft: 8213be6a1edaaef2 became candidate at term 2
	2023-12-06 20:00:36.286400 I | raft: 8213be6a1edaaef2 received MsgVoteResp from 8213be6a1edaaef2 at term 2
	2023-12-06 20:00:36.286454 I | raft: 8213be6a1edaaef2 became leader at term 2
	2023-12-06 20:00:36.286486 I | raft: raft.node: 8213be6a1edaaef2 elected leader 8213be6a1edaaef2 at term 2
	2023-12-06 20:00:36.286787 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-06 20:00:36.288305 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-06 20:00:36.288381 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-06 20:00:36.288407 I | etcdserver: published {Name:old-k8s-version-448851 ClientURLs:[https://192.168.61.33:2379]} to cluster 57e911bf31e05932
	2023-12-06 20:00:36.288423 I | embed: ready to serve client requests
	2023-12-06 20:00:36.288968 I | embed: ready to serve client requests
	2023-12-06 20:00:36.289770 I | embed: serving client requests on 192.168.61.33:2379
	2023-12-06 20:00:36.292056 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-06 20:01:01.307261 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-2nncf\" " with result "range_response_count:1 size:1694" took too long (457.280095ms) to execute
	2023-12-06 20:10:36.905649 I | mvcc: store.index: compact 669
	2023-12-06 20:10:36.907736 I | mvcc: finished scheduled compaction at 669 (took 1.554577ms)
	
	* 
	* ==> kernel <==
	*  20:11:00 up 16 min,  0 users,  load average: 0.06, 0.22, 0.27
	Linux old-k8s-version-448851 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69] <==
	* W1206 20:00:31.063987       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.065884       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.075503       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.091137       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.119253       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.119991       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.137497       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.139043       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.149457       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.151409       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.178915       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.191732       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.203659       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.220000       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.229563       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.233321       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.236734       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.238091       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.261303       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.268956       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.272926       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.285213       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.301365       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.302007       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.306264       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4] <==
	* I1206 20:04:03.933397       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:04:03.933785       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:04:03.933994       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:04:03.934009       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:05:41.221441       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:05:41.221600       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:05:41.221686       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:05:41.221714       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:06:41.222238       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:06:41.222531       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:06:41.222676       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:06:41.222689       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:08:41.223273       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:08:41.223423       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:08:41.223478       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:08:41.223495       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:10:41.226268       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:10:41.226714       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:10:41.226939       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:10:41.226990       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53] <==
	* E1206 20:04:31.896078       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:04:44.892219       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:05:02.148447       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:05:16.894421       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:05:32.400759       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:05:48.897130       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:06:02.653481       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:06:20.900273       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:06:32.906168       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:06:52.902540       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:07:03.159267       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:07:24.904669       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:07:33.411949       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:07:56.907235       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:08:03.664199       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:08:28.910070       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:08:33.916692       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:09:00.912664       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:09:04.169028       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:09:32.914893       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:09:34.421787       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1206 20:10:04.674156       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:10:04.917979       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:10:34.925949       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:10:36.920247       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242] <==
	* W1206 20:01:03.033490       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1206 20:01:03.047080       1 node.go:135] Successfully retrieved node IP: 192.168.61.33
	I1206 20:01:03.047237       1 server_others.go:149] Using iptables Proxier.
	I1206 20:01:03.048296       1 server.go:529] Version: v1.16.0
	I1206 20:01:03.051280       1 config.go:131] Starting endpoints config controller
	I1206 20:01:03.052588       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1206 20:01:03.058026       1 config.go:313] Starting service config controller
	I1206 20:01:03.058163       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1206 20:01:03.156319       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1206 20:01:03.159164       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd] <==
	* W1206 20:00:40.275718       1 authentication.go:79] Authentication is disabled
	I1206 20:00:40.275737       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1206 20:00:40.276338       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1206 20:00:40.332312       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:40.332531       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:40.345371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 20:00:40.345488       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:40.345601       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:40.345950       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 20:00:40.346232       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:40.346363       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 20:00:40.346370       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:40.347131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 20:00:40.348993       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 20:00:41.336311       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:41.357099       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:41.357705       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 20:00:41.359217       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 20:00:41.360931       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 20:00:41.361007       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:41.361051       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:41.361116       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:41.361146       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:41.361390       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 20:00:41.361924       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:54:55 UTC, ends at Wed 2023-12-06 20:11:01 UTC. --
	Dec 06 20:06:27 old-k8s-version-448851 kubelet[3106]: E1206 20:06:27.308375    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:06:42 old-k8s-version-448851 kubelet[3106]: E1206 20:06:42.307407    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:06:55 old-k8s-version-448851 kubelet[3106]: E1206 20:06:55.318715    3106 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 06 20:06:55 old-k8s-version-448851 kubelet[3106]: E1206 20:06:55.318778    3106 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 06 20:06:55 old-k8s-version-448851 kubelet[3106]: E1206 20:06:55.318883    3106 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 06 20:06:55 old-k8s-version-448851 kubelet[3106]: E1206 20:06:55.318917    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 06 20:07:07 old-k8s-version-448851 kubelet[3106]: E1206 20:07:07.308857    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:07:19 old-k8s-version-448851 kubelet[3106]: E1206 20:07:19.310692    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:07:31 old-k8s-version-448851 kubelet[3106]: E1206 20:07:31.307341    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:07:45 old-k8s-version-448851 kubelet[3106]: E1206 20:07:45.307516    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:08:00 old-k8s-version-448851 kubelet[3106]: E1206 20:08:00.307585    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:08:15 old-k8s-version-448851 kubelet[3106]: E1206 20:08:15.314878    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:08:29 old-k8s-version-448851 kubelet[3106]: E1206 20:08:29.308325    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:08:44 old-k8s-version-448851 kubelet[3106]: E1206 20:08:44.307347    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:08:57 old-k8s-version-448851 kubelet[3106]: E1206 20:08:57.307616    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:09:08 old-k8s-version-448851 kubelet[3106]: E1206 20:09:08.307200    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:09:23 old-k8s-version-448851 kubelet[3106]: E1206 20:09:23.309187    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:09:35 old-k8s-version-448851 kubelet[3106]: E1206 20:09:35.308409    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:09:47 old-k8s-version-448851 kubelet[3106]: E1206 20:09:47.308516    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:09:59 old-k8s-version-448851 kubelet[3106]: E1206 20:09:59.308104    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:10:14 old-k8s-version-448851 kubelet[3106]: E1206 20:10:14.308347    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:10:26 old-k8s-version-448851 kubelet[3106]: E1206 20:10:26.307527    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:10:33 old-k8s-version-448851 kubelet[3106]: E1206 20:10:33.395087    3106 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 06 20:10:41 old-k8s-version-448851 kubelet[3106]: E1206 20:10:41.308242    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:10:52 old-k8s-version-448851 kubelet[3106]: E1206 20:10:52.307641    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e] <==
	* I1206 20:01:03.529298       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 20:01:03.546125       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 20:01:03.546257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 20:01:03.565940       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 20:01:03.566286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448851_54eb4b2d-3290-45ce-b3f4-ff1907c8baa1!
	I1206 20:01:03.572036       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ef07ee0-ed24-473c-aea3-e7b6e1797ad9", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-448851_54eb4b2d-3290-45ce-b3f4-ff1907c8baa1 became leader
	I1206 20:01:03.667442       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448851_54eb4b2d-3290-45ce-b3f4-ff1907c8baa1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-448851 -n old-k8s-version-448851
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-448851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-tgtlm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-448851 describe pod metrics-server-74d5856cc6-tgtlm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-448851 describe pod metrics-server-74d5856cc6-tgtlm: exit status 1 (69.327373ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-tgtlm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-448851 describe pod metrics-server-74d5856cc6-tgtlm: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (427.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:16:59.972379497 +0000 UTC m=+5794.056873317
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-380424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.538µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-380424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
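A minimal sketch of how these two checks can be re-run by hand against the same profile, assuming the cluster is still reachable; the context, namespace, label selector, deployment name, and expected image are the ones quoted above, and the 60s timeout is only illustrative (the test itself waits up to 9m0s):

	# Wait for the dashboard pods the test polls for.
	kubectl --context default-k8s-diff-port-380424 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=60s
	# Inspect the scraper deployment; the test expects an image containing registry.k8s.io/echoserver:1.4.
	kubectl --context default-k8s-diff-port-380424 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
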
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-380424 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-380424 logs -n 25: (1.228073286s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC | 06 Dec 23 20:15 UTC |
	| start   | -p newest-cni-347168 --memory=2200 --alsologtostderr   | newest-cni-347168            | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC | 06 Dec 23 20:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC | 06 Dec 23 20:15 UTC |
	| addons  | enable metrics-server -p newest-cni-347168             | newest-cni-347168            | jenkins | v1.32.0 | 06 Dec 23 20:16 UTC | 06 Dec 23 20:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-347168                                   | newest-cni-347168            | jenkins | v1.32.0 | 06 Dec 23 20:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 20:16 UTC | 06 Dec 23 20:16 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 20:15:09
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 20:15:09.805224  120996 out.go:296] Setting OutFile to fd 1 ...
	I1206 20:15:09.805509  120996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:15:09.805520  120996 out.go:309] Setting ErrFile to fd 2...
	I1206 20:15:09.805524  120996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:15:09.805720  120996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 20:15:09.806348  120996 out.go:303] Setting JSON to false
	I1206 20:15:09.807270  120996 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10660,"bootTime":1701883050,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 20:15:09.807333  120996 start.go:138] virtualization: kvm guest
	I1206 20:15:09.809854  120996 out.go:177] * [newest-cni-347168] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 20:15:09.811393  120996 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 20:15:09.812932  120996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 20:15:09.811424  120996 notify.go:220] Checking for updates...
	I1206 20:15:09.815815  120996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:15:09.817403  120996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:09.818874  120996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 20:15:09.820369  120996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 20:15:09.822395  120996 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:15:09.822498  120996 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:15:09.822603  120996 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:15:09.822725  120996 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 20:15:09.861615  120996 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 20:15:09.863332  120996 start.go:298] selected driver: kvm2
	I1206 20:15:09.863353  120996 start.go:902] validating driver "kvm2" against <nil>
	I1206 20:15:09.863380  120996 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 20:15:09.864102  120996 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 20:15:09.864195  120996 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 20:15:09.879735  120996 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 20:15:09.879783  120996 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1206 20:15:09.879805  120996 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 20:15:09.880097  120996 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 20:15:09.880183  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:09.880204  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:09.880226  120996 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 20:15:09.880242  120996 start_flags.go:323] config:
	{Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 20:15:09.880418  120996 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 20:15:09.882883  120996 out.go:177] * Starting control plane node newest-cni-347168 in cluster newest-cni-347168
	I1206 20:15:09.884341  120996 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 20:15:09.884386  120996 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1206 20:15:09.884401  120996 cache.go:56] Caching tarball of preloaded images
	I1206 20:15:09.884535  120996 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 20:15:09.884549  120996 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1206 20:15:09.884667  120996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json ...
	I1206 20:15:09.884703  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json: {Name:mkc51a1c7ccc2567aa83707a3b832218332d0cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:09.884894  120996 start.go:365] acquiring machines lock for newest-cni-347168: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 20:15:09.884933  120996 start.go:369] acquired machines lock for "newest-cni-347168" in 22.74µs
	I1206 20:15:09.884956  120996 start.go:93] Provisioning new machine with config: &{Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:15:09.885048  120996 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 20:15:09.886939  120996 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1206 20:15:09.887110  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:15:09.887163  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:15:09.902685  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I1206 20:15:09.903118  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:15:09.903749  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:15:09.903771  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:15:09.904154  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:15:09.904366  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:09.904499  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:09.904692  120996 start.go:159] libmachine.API.Create for "newest-cni-347168" (driver="kvm2")
	I1206 20:15:09.904762  120996 client.go:168] LocalClient.Create starting
	I1206 20:15:09.904828  120996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem
	I1206 20:15:09.904862  120996 main.go:141] libmachine: Decoding PEM data...
	I1206 20:15:09.904880  120996 main.go:141] libmachine: Parsing certificate...
	I1206 20:15:09.904944  120996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem
	I1206 20:15:09.904961  120996 main.go:141] libmachine: Decoding PEM data...
	I1206 20:15:09.904976  120996 main.go:141] libmachine: Parsing certificate...
	I1206 20:15:09.904993  120996 main.go:141] libmachine: Running pre-create checks...
	I1206 20:15:09.905007  120996 main.go:141] libmachine: (newest-cni-347168) Calling .PreCreateCheck
	I1206 20:15:09.905441  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:09.905904  120996 main.go:141] libmachine: Creating machine...
	I1206 20:15:09.905926  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Create
	I1206 20:15:09.906160  120996 main.go:141] libmachine: (newest-cni-347168) Creating KVM machine...
	I1206 20:15:09.907558  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found existing default KVM network
	I1206 20:15:09.908771  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.908571  121019 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:15:65} reservation:<nil>}
	I1206 20:15:09.909652  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.909565  121019 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:51:aa} reservation:<nil>}
	I1206 20:15:09.910815  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.910704  121019 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001fcfb0}
	I1206 20:15:09.916826  120996 main.go:141] libmachine: (newest-cni-347168) DBG | trying to create private KVM network mk-newest-cni-347168 192.168.61.0/24...
	I1206 20:15:10.001011  120996 main.go:141] libmachine: (newest-cni-347168) DBG | private KVM network mk-newest-cni-347168 192.168.61.0/24 created
	I1206 20:15:10.001053  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.000937  121019 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:10.001072  120996 main.go:141] libmachine: (newest-cni-347168) Setting up store path in /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 ...
	I1206 20:15:10.001125  120996 main.go:141] libmachine: (newest-cni-347168) Building disk image from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 20:15:10.001177  120996 main.go:141] libmachine: (newest-cni-347168) Downloading /home/jenkins/minikube-integration/17740-63652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1206 20:15:10.243016  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.242863  121019 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa...
	I1206 20:15:10.293758  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.293630  121019 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/newest-cni-347168.rawdisk...
	I1206 20:15:10.293791  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Writing magic tar header
	I1206 20:15:10.293805  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Writing SSH key tar header
	I1206 20:15:10.293814  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.293781  121019 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 ...
	I1206 20:15:10.293940  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168
	I1206 20:15:10.293981  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines
	I1206 20:15:10.293999  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 (perms=drwx------)
	I1206 20:15:10.294014  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:10.294031  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652
	I1206 20:15:10.294057  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1206 20:15:10.294074  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins
	I1206 20:15:10.294090  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines (perms=drwxr-xr-x)
	I1206 20:15:10.294110  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube (perms=drwxr-xr-x)
	I1206 20:15:10.294124  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652 (perms=drwxrwxr-x)
	I1206 20:15:10.294139  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 20:15:10.294151  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 20:15:10.294165  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home
	I1206 20:15:10.294177  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Skipping /home - not owner
	I1206 20:15:10.294190  120996 main.go:141] libmachine: (newest-cni-347168) Creating domain...
	I1206 20:15:10.295484  120996 main.go:141] libmachine: (newest-cni-347168) define libvirt domain using xml: 
	I1206 20:15:10.295514  120996 main.go:141] libmachine: (newest-cni-347168) <domain type='kvm'>
	I1206 20:15:10.295523  120996 main.go:141] libmachine: (newest-cni-347168)   <name>newest-cni-347168</name>
	I1206 20:15:10.295529  120996 main.go:141] libmachine: (newest-cni-347168)   <memory unit='MiB'>2200</memory>
	I1206 20:15:10.295535  120996 main.go:141] libmachine: (newest-cni-347168)   <vcpu>2</vcpu>
	I1206 20:15:10.295540  120996 main.go:141] libmachine: (newest-cni-347168)   <features>
	I1206 20:15:10.295546  120996 main.go:141] libmachine: (newest-cni-347168)     <acpi/>
	I1206 20:15:10.295559  120996 main.go:141] libmachine: (newest-cni-347168)     <apic/>
	I1206 20:15:10.295581  120996 main.go:141] libmachine: (newest-cni-347168)     <pae/>
	I1206 20:15:10.295594  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.295603  120996 main.go:141] libmachine: (newest-cni-347168)   </features>
	I1206 20:15:10.295610  120996 main.go:141] libmachine: (newest-cni-347168)   <cpu mode='host-passthrough'>
	I1206 20:15:10.295624  120996 main.go:141] libmachine: (newest-cni-347168)   
	I1206 20:15:10.295634  120996 main.go:141] libmachine: (newest-cni-347168)   </cpu>
	I1206 20:15:10.295666  120996 main.go:141] libmachine: (newest-cni-347168)   <os>
	I1206 20:15:10.295693  120996 main.go:141] libmachine: (newest-cni-347168)     <type>hvm</type>
	I1206 20:15:10.295705  120996 main.go:141] libmachine: (newest-cni-347168)     <boot dev='cdrom'/>
	I1206 20:15:10.295748  120996 main.go:141] libmachine: (newest-cni-347168)     <boot dev='hd'/>
	I1206 20:15:10.295764  120996 main.go:141] libmachine: (newest-cni-347168)     <bootmenu enable='no'/>
	I1206 20:15:10.295788  120996 main.go:141] libmachine: (newest-cni-347168)   </os>
	I1206 20:15:10.295801  120996 main.go:141] libmachine: (newest-cni-347168)   <devices>
	I1206 20:15:10.295815  120996 main.go:141] libmachine: (newest-cni-347168)     <disk type='file' device='cdrom'>
	I1206 20:15:10.295837  120996 main.go:141] libmachine: (newest-cni-347168)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/boot2docker.iso'/>
	I1206 20:15:10.295848  120996 main.go:141] libmachine: (newest-cni-347168)       <target dev='hdc' bus='scsi'/>
	I1206 20:15:10.295861  120996 main.go:141] libmachine: (newest-cni-347168)       <readonly/>
	I1206 20:15:10.295872  120996 main.go:141] libmachine: (newest-cni-347168)     </disk>
	I1206 20:15:10.295886  120996 main.go:141] libmachine: (newest-cni-347168)     <disk type='file' device='disk'>
	I1206 20:15:10.295904  120996 main.go:141] libmachine: (newest-cni-347168)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1206 20:15:10.295923  120996 main.go:141] libmachine: (newest-cni-347168)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/newest-cni-347168.rawdisk'/>
	I1206 20:15:10.295936  120996 main.go:141] libmachine: (newest-cni-347168)       <target dev='hda' bus='virtio'/>
	I1206 20:15:10.295949  120996 main.go:141] libmachine: (newest-cni-347168)     </disk>
	I1206 20:15:10.295958  120996 main.go:141] libmachine: (newest-cni-347168)     <interface type='network'>
	I1206 20:15:10.295982  120996 main.go:141] libmachine: (newest-cni-347168)       <source network='mk-newest-cni-347168'/>
	I1206 20:15:10.295999  120996 main.go:141] libmachine: (newest-cni-347168)       <model type='virtio'/>
	I1206 20:15:10.296069  120996 main.go:141] libmachine: (newest-cni-347168)     </interface>
	I1206 20:15:10.296096  120996 main.go:141] libmachine: (newest-cni-347168)     <interface type='network'>
	I1206 20:15:10.296114  120996 main.go:141] libmachine: (newest-cni-347168)       <source network='default'/>
	I1206 20:15:10.296123  120996 main.go:141] libmachine: (newest-cni-347168)       <model type='virtio'/>
	I1206 20:15:10.296133  120996 main.go:141] libmachine: (newest-cni-347168)     </interface>
	I1206 20:15:10.296142  120996 main.go:141] libmachine: (newest-cni-347168)     <serial type='pty'>
	I1206 20:15:10.296151  120996 main.go:141] libmachine: (newest-cni-347168)       <target port='0'/>
	I1206 20:15:10.296158  120996 main.go:141] libmachine: (newest-cni-347168)     </serial>
	I1206 20:15:10.296167  120996 main.go:141] libmachine: (newest-cni-347168)     <console type='pty'>
	I1206 20:15:10.296175  120996 main.go:141] libmachine: (newest-cni-347168)       <target type='serial' port='0'/>
	I1206 20:15:10.296184  120996 main.go:141] libmachine: (newest-cni-347168)     </console>
	I1206 20:15:10.296192  120996 main.go:141] libmachine: (newest-cni-347168)     <rng model='virtio'>
	I1206 20:15:10.296204  120996 main.go:141] libmachine: (newest-cni-347168)       <backend model='random'>/dev/random</backend>
	I1206 20:15:10.296211  120996 main.go:141] libmachine: (newest-cni-347168)     </rng>
	I1206 20:15:10.296220  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.296234  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.296244  120996 main.go:141] libmachine: (newest-cni-347168)   </devices>
	I1206 20:15:10.296252  120996 main.go:141] libmachine: (newest-cni-347168) </domain>
	I1206 20:15:10.296280  120996 main.go:141] libmachine: (newest-cni-347168) 
	I1206 20:15:10.300528  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:7f:92:13 in network default
	I1206 20:15:10.301121  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring networks are active...
	I1206 20:15:10.301154  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:10.301898  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring network default is active
	I1206 20:15:10.302202  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring network mk-newest-cni-347168 is active
	I1206 20:15:10.302641  120996 main.go:141] libmachine: (newest-cni-347168) Getting domain xml...
	I1206 20:15:10.303450  120996 main.go:141] libmachine: (newest-cni-347168) Creating domain...
	I1206 20:15:11.631063  120996 main.go:141] libmachine: (newest-cni-347168) Waiting to get IP...
	I1206 20:15:11.631867  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:11.632488  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:11.632520  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:11.632443  121019 retry.go:31] will retry after 233.957525ms: waiting for machine to come up
	I1206 20:15:11.867869  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:11.868462  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:11.868491  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:11.868395  121019 retry.go:31] will retry after 255.274669ms: waiting for machine to come up
	I1206 20:15:12.124876  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.125472  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.125503  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.125411  121019 retry.go:31] will retry after 349.317013ms: waiting for machine to come up
	I1206 20:15:12.475860  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.476566  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.476599  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.476497  121019 retry.go:31] will retry after 416.403168ms: waiting for machine to come up
	I1206 20:15:12.894125  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.894686  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.894709  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.894603  121019 retry.go:31] will retry after 608.573742ms: waiting for machine to come up
	I1206 20:15:13.504176  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:13.504628  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:13.504660  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:13.504560  121019 retry.go:31] will retry after 646.189699ms: waiting for machine to come up
	I1206 20:15:14.152435  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:14.152802  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:14.152825  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:14.152756  121019 retry.go:31] will retry after 961.404409ms: waiting for machine to come up
	I1206 20:15:15.115574  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:15.116051  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:15.116073  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:15.115993  121019 retry.go:31] will retry after 1.329333828s: waiting for machine to come up
	I1206 20:15:16.447315  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:16.447883  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:16.447925  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:16.447841  121019 retry.go:31] will retry after 1.448183792s: waiting for machine to come up
	I1206 20:15:17.898296  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:17.898794  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:17.898835  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:17.898770  121019 retry.go:31] will retry after 1.963121871s: waiting for machine to come up
	I1206 20:15:19.863330  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:19.863874  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:19.863907  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:19.863824  121019 retry.go:31] will retry after 1.863190443s: waiting for machine to come up
	I1206 20:15:21.729550  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:21.730063  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:21.730098  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:21.730003  121019 retry.go:31] will retry after 3.534433438s: waiting for machine to come up
	I1206 20:15:25.266286  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:25.266770  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:25.266793  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:25.266731  121019 retry.go:31] will retry after 3.268833182s: waiting for machine to come up
	I1206 20:15:28.538314  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:28.538836  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:28.538866  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:28.538774  121019 retry.go:31] will retry after 4.552063341s: waiting for machine to come up
	I1206 20:15:33.094236  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.094859  120996 main.go:141] libmachine: (newest-cni-347168) Found IP for machine: 192.168.61.192
	I1206 20:15:33.094891  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has current primary IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.094903  120996 main.go:141] libmachine: (newest-cni-347168) Reserving static IP address...
	I1206 20:15:33.095318  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find host DHCP lease matching {name: "newest-cni-347168", mac: "52:54:00:11:9b:a6", ip: "192.168.61.192"} in network mk-newest-cni-347168
	I1206 20:15:33.176566  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Getting to WaitForSSH function...
	I1206 20:15:33.176603  120996 main.go:141] libmachine: (newest-cni-347168) Reserved static IP address: 192.168.61.192
	I1206 20:15:33.176620  120996 main.go:141] libmachine: (newest-cni-347168) Waiting for SSH to be available...
	I1206 20:15:33.179571  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.180101  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.180146  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.180242  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using SSH client type: external
	I1206 20:15:33.180273  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa (-rw-------)
	I1206 20:15:33.180316  120996 main.go:141] libmachine: (newest-cni-347168) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 20:15:33.180335  120996 main.go:141] libmachine: (newest-cni-347168) DBG | About to run SSH command:
	I1206 20:15:33.180354  120996 main.go:141] libmachine: (newest-cni-347168) DBG | exit 0
	I1206 20:15:33.269146  120996 main.go:141] libmachine: (newest-cni-347168) DBG | SSH cmd err, output: <nil>: 
	I1206 20:15:33.269444  120996 main.go:141] libmachine: (newest-cni-347168) KVM machine creation complete!
	I1206 20:15:33.269829  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:33.270405  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:33.270633  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:33.270822  120996 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1206 20:15:33.270835  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:15:33.272293  120996 main.go:141] libmachine: Detecting operating system of created instance...
	I1206 20:15:33.272342  120996 main.go:141] libmachine: Waiting for SSH to be available...
	I1206 20:15:33.272355  120996 main.go:141] libmachine: Getting to WaitForSSH function...
	I1206 20:15:33.272365  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.275189  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.275639  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.275661  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.275861  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.276078  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.276274  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.276436  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.276619  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.277063  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.277084  120996 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1206 20:15:33.396625  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 20:15:33.396664  120996 main.go:141] libmachine: Detecting the provisioner...
	I1206 20:15:33.396673  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.399852  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.400190  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.400224  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.400361  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.400593  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.400784  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.400971  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.401166  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.401629  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.401646  120996 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1206 20:15:33.527309  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1206 20:15:33.527418  120996 main.go:141] libmachine: found compatible host: buildroot
	I1206 20:15:33.527427  120996 main.go:141] libmachine: Provisioning with buildroot...
	I1206 20:15:33.527434  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.527777  120996 buildroot.go:166] provisioning hostname "newest-cni-347168"
	I1206 20:15:33.527818  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.528027  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.530841  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.531228  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.531280  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.531377  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.531609  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.531813  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.532007  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.532266  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.532677  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.532700  120996 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-347168 && echo "newest-cni-347168" | sudo tee /etc/hostname
	I1206 20:15:33.662449  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-347168
	
	I1206 20:15:33.662483  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.665436  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.665800  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.665846  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.665981  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.666218  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.666403  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.666527  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.666696  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.667172  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.667192  120996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-347168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-347168/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-347168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 20:15:33.796492  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 20:15:33.796531  120996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 20:15:33.796567  120996 buildroot.go:174] setting up certificates
	I1206 20:15:33.796589  120996 provision.go:83] configureAuth start
	I1206 20:15:33.796604  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.796964  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:33.799993  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.800370  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.800403  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.800521  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.802989  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.803300  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.803341  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.803478  120996 provision.go:138] copyHostCerts
	I1206 20:15:33.803571  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 20:15:33.803603  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 20:15:33.803687  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 20:15:33.803858  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 20:15:33.803869  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 20:15:33.803910  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 20:15:33.804042  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 20:15:33.804091  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 20:15:33.804141  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 20:15:33.804214  120996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-347168 san=[192.168.61.192 192.168.61.192 localhost 127.0.0.1 minikube newest-cni-347168]
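The server certificate generated above is signed by the cached minikube CA and carries both IP and DNS SANs. The following is a rough Go sketch of how a SAN-bearing server certificate can be produced with crypto/x509; the CA is generated inline so the example is self-contained, the lifetime is an assumption, and minikube's real provision.go code path differs.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate for the SAN set seen in the log,
// signed by the supplied CA certificate and key.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-347168"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // lifetime is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-347168"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.192"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway CA so the sketch is self-contained; the test run reuses ~/.minikube/certs/ca.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := signServerCert(ca, caKey)
	fmt.Println("server cert DER bytes:", len(der), "err:", err)
}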
	I1206 20:15:33.994563  120996 provision.go:172] copyRemoteCerts
	I1206 20:15:33.994644  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 20:15:33.994682  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.997818  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.998118  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.998153  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.998411  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.998612  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.998774  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.998935  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.091615  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 20:15:34.118438  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 20:15:34.145084  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 20:15:34.170898  120996 provision.go:86] duration metric: configureAuth took 374.286079ms
	I1206 20:15:34.170929  120996 buildroot.go:189] setting minikube options for container-runtime
	I1206 20:15:34.171164  120996 config.go:182] Loaded profile config "newest-cni-347168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:15:34.171268  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.174189  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.174600  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.174628  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.174785  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.174985  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.175141  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.175338  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.175523  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:34.175843  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:34.175862  120996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 20:15:34.505869  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 20:15:34.505897  120996 main.go:141] libmachine: Checking connection to Docker...
	I1206 20:15:34.505925  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetURL
	I1206 20:15:34.507244  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using libvirt version 6000000
	I1206 20:15:34.509869  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.510193  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.510223  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.510381  120996 main.go:141] libmachine: Docker is up and running!
	I1206 20:15:34.510395  120996 main.go:141] libmachine: Reticulating splines...
	I1206 20:15:34.510402  120996 client.go:171] LocalClient.Create took 24.605627718s
	I1206 20:15:34.510422  120996 start.go:167] duration metric: libmachine.API.Create for "newest-cni-347168" took 24.605732185s
	I1206 20:15:34.510431  120996 start.go:300] post-start starting for "newest-cni-347168" (driver="kvm2")
	I1206 20:15:34.510441  120996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 20:15:34.510457  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.510730  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 20:15:34.510761  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.512910  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.513206  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.513248  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.513417  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.513618  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.513799  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.513964  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.602772  120996 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 20:15:34.607707  120996 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 20:15:34.607747  120996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 20:15:34.607827  120996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 20:15:34.607921  120996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 20:15:34.608034  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 20:15:34.617266  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 20:15:34.642598  120996 start.go:303] post-start completed in 132.153683ms
	I1206 20:15:34.642655  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:34.643248  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:34.645908  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.646216  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.646250  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.646495  120996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json ...
	I1206 20:15:34.646667  120996 start.go:128] duration metric: createHost completed in 24.7616076s
	I1206 20:15:34.646690  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.649005  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.649396  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.649427  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.649582  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.649793  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.649962  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.650115  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.650296  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:34.650651  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:34.650665  120996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1206 20:15:34.770239  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701893734.748854790
	
	I1206 20:15:34.770269  120996 fix.go:206] guest clock: 1701893734.748854790
	I1206 20:15:34.770279  120996 fix.go:219] Guest: 2023-12-06 20:15:34.74885479 +0000 UTC Remote: 2023-12-06 20:15:34.646679476 +0000 UTC m=+24.893998228 (delta=102.175314ms)
	I1206 20:15:34.770307  120996 fix.go:190] guest clock delta is within tolerance: 102.175314ms
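The tolerance check above is a plain absolute-difference comparison between the guest's "date +%s.%N" output and the host's clock. A tiny Go sketch using the two timestamps from the log; the 2s tolerance is an assumption, not minikube's exact constant.

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether the guest/host clock skew is acceptable.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1701893734, 748854790) // parsed from the guest's "date +%s.%N" output
	host := time.Date(2023, 12, 6, 20, 15, 34, 646679476, time.UTC)
	fmt.Println("within tolerance:", clockWithinTolerance(guest, host, 2*time.Second))
}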
	I1206 20:15:34.770313  120996 start.go:83] releasing machines lock for "newest-cni-347168", held for 24.885371157s
	I1206 20:15:34.770338  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.770693  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:34.773617  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.774159  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.774191  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.774423  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775037  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775241  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775404  120996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 20:15:34.775472  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.775508  120996 ssh_runner.go:195] Run: cat /version.json
	I1206 20:15:34.775536  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.778593  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.778852  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779035  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.779083  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779187  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.779216  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779351  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.779479  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.779560  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.779632  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.779712  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.779772  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.779846  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.779906  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.863386  120996 ssh_runner.go:195] Run: systemctl --version
	I1206 20:15:34.895207  120996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 20:15:35.057492  120996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 20:15:35.064260  120996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 20:15:35.064332  120996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 20:15:35.080857  120996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
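The step above scans /etc/cni/net.d for bridge/podman configs and renames them with a .mk_disabled suffix so CRI-O ignores them. A local Go equivalent of that find/mv one-liner, as a sketch only (the real step runs on the guest over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames any bridge/podman CNI config to <name>.mk_disabled.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	fmt.Println("disabled:", disabled, "err:", err)
}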
	I1206 20:15:35.080883  120996 start.go:475] detecting cgroup driver to use...
	I1206 20:15:35.080977  120996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 20:15:35.094647  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 20:15:35.108721  120996 docker.go:203] disabling cri-docker service (if available) ...
	I1206 20:15:35.108805  120996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 20:15:35.122547  120996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 20:15:35.137628  120996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 20:15:35.249519  120996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 20:15:35.372591  120996 docker.go:219] disabling docker service ...
	I1206 20:15:35.372650  120996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 20:15:35.386595  120996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 20:15:35.399053  120996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 20:15:35.517013  120996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 20:15:35.630728  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 20:15:35.642975  120996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 20:15:35.661406  120996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 20:15:35.661494  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.670952  120996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 20:15:35.671028  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.680444  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.690123  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.699431  120996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 20:15:35.709773  120996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 20:15:35.718080  120996 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 20:15:35.718160  120996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 20:15:35.729953  120996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 20:15:35.739791  120996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 20:15:35.856949  120996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 20:15:36.044563  120996 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 20:15:36.044646  120996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 20:15:36.050663  120996 start.go:543] Will wait 60s for crictl version
	I1206 20:15:36.050727  120996 ssh_runner.go:195] Run: which crictl
	I1206 20:15:36.055266  120996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 20:15:36.095529  120996 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 20:15:36.095602  120996 ssh_runner.go:195] Run: crio --version
	I1206 20:15:36.141633  120996 ssh_runner.go:195] Run: crio --version
	I1206 20:15:36.192165  120996 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 20:15:36.193762  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:36.197069  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:36.197489  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:36.197518  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:36.197830  120996 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 20:15:36.202239  120996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 20:15:36.215884  120996 localpath.go:92] copying /home/jenkins/minikube-integration/17740-63652/.minikube/client.crt -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.crt
	I1206 20:15:36.216041  120996 localpath.go:117] copying /home/jenkins/minikube-integration/17740-63652/.minikube/client.key -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.key
	I1206 20:15:36.218392  120996 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 20:15:36.220048  120996 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 20:15:36.220120  120996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 20:15:36.262585  120996 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 20:15:36.262652  120996 ssh_runner.go:195] Run: which lz4
	I1206 20:15:36.267061  120996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 20:15:36.271359  120996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 20:15:36.271388  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401677649 bytes)
	I1206 20:15:37.981124  120996 crio.go:444] Took 1.714117 seconds to copy over tarball
	I1206 20:15:37.981223  120996 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 20:15:40.790111  120996 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.808826705s)
	I1206 20:15:40.790157  120996 crio.go:451] Took 2.809002 seconds to extract the tarball
	I1206 20:15:40.790167  120996 ssh_runner.go:146] rm: /preloaded.tar.lz4
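The preload sequence above is: probe for /preloaded.tar.lz4, copy the cached tarball when it is missing, unpack it under /var with "tar -I lz4", then delete it. A rough local Go sketch of the same flow; the log does the copy over scp and runs every step on the guest, so plain exec calls merely stand in here.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload copies and extracts the cached preload tarball if it is not already present.
func ensurePreload(cacheTarball, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // tarball already on the node; skip the copy
	}
	// The log copies the cached tarball over scp; a plain cp stands in for it here.
	if out, err := exec.Command("sudo", "cp", cacheTarball, target).CombinedOutput(); err != nil {
		return fmt.Errorf("copy preload: %v: %s", err, out)
	}
	// Same extraction the log runs: tar -I lz4 -C /var -xf /preloaded.tar.lz4
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", target).CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	// Clean up, mirroring the rm step logged afterwards.
	if out, err := exec.Command("sudo", "rm", "-f", target).CombinedOutput(); err != nil {
		return fmt.Errorf("remove preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := ensurePreload(
		"/home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4")
	fmt.Println("preload:", err)
}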
	I1206 20:15:40.828966  120996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 20:15:40.916896  120996 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 20:15:40.916921  120996 cache_images.go:84] Images are preloaded, skipping loading
	I1206 20:15:40.916985  120996 ssh_runner.go:195] Run: crio config
	I1206 20:15:40.998264  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:40.998288  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:40.998307  120996 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1206 20:15:40.998328  120996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-347168 NodeName:newest-cni-347168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 20:15:40.998468  120996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-347168"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 20:15:40.998549  120996 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-347168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 20:15:40.998608  120996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 20:15:41.008416  120996 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 20:15:41.008501  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 20:15:41.017748  120996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1206 20:15:41.035185  120996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 20:15:41.052224  120996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1206 20:15:41.069299  120996 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I1206 20:15:41.073265  120996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
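The bash one-liner above makes the /etc/hosts update idempotent: drop any existing line ending in a tab plus the name, append the fresh mapping, then copy the file back with sudo. The same logic as a small Go sketch operating on a local file; this is illustrative only, since the real step runs remotely via ssh_runner and needs root to write /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry removes any stale line for name and appends "ip<TAB>name".
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := setHostsEntry("/etc/hosts", "192.168.61.192", "control-plane.minikube.internal")
	fmt.Println("update /etc/hosts:", err)
}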
	I1206 20:15:41.085857  120996 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168 for IP: 192.168.61.192
	I1206 20:15:41.085896  120996 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.086087  120996 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 20:15:41.086151  120996 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 20:15:41.086325  120996 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.key
	I1206 20:15:41.086357  120996 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21
	I1206 20:15:41.086373  120996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 with IP's: [192.168.61.192 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 20:15:41.197437  120996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 ...
	I1206 20:15:41.197470  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21: {Name:mkbbadf29b0d59f332c8ce9ff67c67d3ca12aa26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.197661  120996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21 ...
	I1206 20:15:41.197682  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21: {Name:mk4c3c03bcb2230fc8cb74c47ba0e05d48da0ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.197774  120996 certs.go:337] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt
	I1206 20:15:41.197880  120996 certs.go:341] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key
	I1206 20:15:41.197949  120996 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key
	I1206 20:15:41.197971  120996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt with IP's: []
	I1206 20:15:41.598679  120996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt ...
	I1206 20:15:41.598710  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt: {Name:mkb77a95ad0addf9acd5c9bf01b0ffc8de6e0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.598874  120996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key ...
	I1206 20:15:41.598889  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key: {Name:mkc732ed250bbf0840017180e73efc203eba166f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.599055  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 20:15:41.599093  120996 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 20:15:41.599103  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 20:15:41.599125  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 20:15:41.599168  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 20:15:41.599195  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 20:15:41.599232  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 20:15:41.599883  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 20:15:41.624812  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 20:15:41.650187  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 20:15:41.674485  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 20:15:41.698270  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 20:15:41.721020  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 20:15:41.745140  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 20:15:41.770557  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 20:15:41.795231  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 20:15:41.821360  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 20:15:41.845544  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 20:15:41.869335  120996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 20:15:41.888087  120996 ssh_runner.go:195] Run: openssl version
	I1206 20:15:41.894632  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 20:15:41.907245  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.912955  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.913025  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.919221  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 20:15:41.930660  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 20:15:41.942151  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.946967  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.947034  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.952949  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 20:15:41.963528  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 20:15:41.973984  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.978597  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.978663  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.984469  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 20:15:41.995387  120996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 20:15:41.999768  120996 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 20:15:41.999815  120996 kubeadm.go:404] StartCluster: {Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 20:15:41.999880  120996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 20:15:41.999947  120996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 20:15:42.047446  120996 cri.go:89] found id: ""
	I1206 20:15:42.047529  120996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 20:15:42.057915  120996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:15:42.068059  120996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:15:42.080208  120996 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:15:42.080260  120996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:15:42.214896  120996 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1206 20:15:42.214985  120996 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:15:42.492727  120996 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:15:42.492883  120996 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:15:42.493047  120996 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 20:15:42.746186  120996 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:15:42.761997  120996 out.go:204]   - Generating certificates and keys ...
	I1206 20:15:42.762133  120996 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:15:42.762238  120996 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:15:42.946642  120996 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 20:15:43.233781  120996 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 20:15:43.428093  120996 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 20:15:43.572927  120996 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 20:15:43.675521  120996 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 20:15:43.675955  120996 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-347168] and IPs [192.168.61.192 127.0.0.1 ::1]
	I1206 20:15:44.078655  120996 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 20:15:44.078879  120996 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-347168] and IPs [192.168.61.192 127.0.0.1 ::1]
	I1206 20:15:44.303828  120996 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 20:15:44.358076  120996 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 20:15:44.518551  120996 kubeadm.go:322] [certs] Generating "sa" key and public key
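The SANs reported above for the etcd serving certificates (localhost, the node name, 192.168.61.192, 127.0.0.1, ::1) can be verified directly on disk. The certificate directory comes from the log ("/var/lib/minikube/certs"); the exact filename below follows kubeadm's usual layout and is an assumption:

    # Inspect the Subject Alternative Names of the etcd serving cert (filename assumed).
    sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
      | grep -A1 'Subject Alternative Name'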
	I1206 20:15:44.518878  120996 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:15:44.689318  120996 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:15:44.979567  120996 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 20:15:45.074293  120996 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:15:45.291683  120996 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:15:45.481809  120996 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:15:45.482648  120996 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:15:45.486356  120996 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:15:45.488443  120996 out.go:204]   - Booting up control plane ...
	I1206 20:15:45.488566  120996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:15:45.488678  120996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:15:45.488756  120996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:15:45.508193  120996 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:15:45.508987  120996 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:15:45.509071  120996 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:15:45.651715  120996 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:15:53.654790  120996 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005357 seconds
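The control plane comes up as static pods from the manifest directory named in the wait-control-plane message. A quick manual confirmation that kubeadm wrote the manifests and that the kubelet started them looks like this (both paths and the crictl label filter are taken from earlier lines of this log):

    # Static pod manifests written by kubeadm.
    sudo ls /etc/kubernetes/manifests
    # Containers started from them, via the CRI-O CLI used earlier in this run.
    sudo crictl ps --label io.kubernetes.pod.namespace=kube-system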
	I1206 20:15:53.672507  120996 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:15:53.686605  120996 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:15:54.227394  120996 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:15:54.227619  120996 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-347168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:15:54.743961  120996 kubeadm.go:322] [bootstrap-token] Using token: zzfjhv.rhhjxylbr6v9obzo
	I1206 20:15:54.745695  120996 out.go:204]   - Configuring RBAC rules ...
	I1206 20:15:54.745846  120996 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:15:54.757514  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:15:54.767939  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:15:54.774859  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:15:54.780189  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:15:54.790194  120996 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:15:54.802105  120996 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:15:55.063098  120996 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:15:55.170001  120996 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:15:55.174699  120996 kubeadm.go:322] 
	I1206 20:15:55.174776  120996 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:15:55.174793  120996 kubeadm.go:322] 
	I1206 20:15:55.174869  120996 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:15:55.174880  120996 kubeadm.go:322] 
	I1206 20:15:55.174915  120996 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:15:55.174990  120996 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:15:55.175102  120996 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:15:55.175127  120996 kubeadm.go:322] 
	I1206 20:15:55.175224  120996 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:15:55.175237  120996 kubeadm.go:322] 
	I1206 20:15:55.175309  120996 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:15:55.175319  120996 kubeadm.go:322] 
	I1206 20:15:55.175388  120996 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:15:55.175496  120996 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:15:55.175614  120996 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:15:55.175625  120996 kubeadm.go:322] 
	I1206 20:15:55.175749  120996 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:15:55.175871  120996 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:15:55.175880  120996 kubeadm.go:322] 
	I1206 20:15:55.176008  120996 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zzfjhv.rhhjxylbr6v9obzo \
	I1206 20:15:55.176148  120996 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:15:55.176178  120996 kubeadm.go:322] 	--control-plane 
	I1206 20:15:55.176188  120996 kubeadm.go:322] 
	I1206 20:15:55.176289  120996 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:15:55.176301  120996 kubeadm.go:322] 
	I1206 20:15:55.176396  120996 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zzfjhv.rhhjxylbr6v9obzo \
	I1206 20:15:55.176519  120996 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:15:55.176693  120996 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
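The trailing [WARNING Service-Kubelet] is kubeadm's standard reminder; on a host that should survive reboots, the fix is exactly what the message suggests:

    sudo systemctl enable kubelet.service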
	I1206 20:15:55.176720  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:55.176734  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:55.178744  120996 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:15:55.180471  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:15:55.195688  120996 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
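The 457-byte file copied here is minikube's bridge CNI configuration. Its exact contents are not captured in this log; as a purely illustrative sketch, a bridge conflist generally has the following shape (the pod subnet below is assumed from the pod-network-cidr ExtraOption shown at the top of this section, and the plugin list is not guaranteed to match minikube's file):

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }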
	I1206 20:15:55.215480  120996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:15:55.215551  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=newest-cni-347168 minikube.k8s.io/updated_at=2023_12_06T20_15_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:55.215551  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:55.558703  120996 ops.go:34] apiserver oom_adj: -16
	I1206 20:15:55.558893  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:55.654316  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:56.238770  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:56.738289  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:57.238568  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:57.738490  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:58.238942  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:58.738647  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:59.238245  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:59.739042  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:00.238719  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:00.738914  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:01.238187  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:01.738834  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:02.238212  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:02.739060  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:03.238224  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:03.738976  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:04.238152  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:04.738477  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:05.238489  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:05.738190  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:06.238207  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:06.739054  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:07.238517  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:07.738230  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:07.846909  120996 kubeadm.go:1088] duration metric: took 12.63143328s to wait for elevateKubeSystemPrivileges.
	I1206 20:16:07.846950  120996 kubeadm.go:406] StartCluster complete in 25.847137925s
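The repeated "kubectl get sa default" calls above are minikube polling until the default ServiceAccount exists, as part of elevateKubeSystemPrivileges (the cluster-admin binding for kube-system:default was created a few lines earlier). The loop is roughly equivalent to this sketch, not minikube's exact code:

    # Poll until the default ServiceAccount has been created (paths taken from the log).
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 1
    done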
	I1206 20:16:07.846977  120996 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:16:07.847064  120996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:16:07.851131  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:16:07.851458  120996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:16:07.851554  120996 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:16:07.851634  120996 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-347168"
	I1206 20:16:07.851655  120996 addons.go:69] Setting default-storageclass=true in profile "newest-cni-347168"
	I1206 20:16:07.851666  120996 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-347168"
	I1206 20:16:07.851684  120996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-347168"
	I1206 20:16:07.851734  120996 host.go:66] Checking if "newest-cni-347168" exists ...
	I1206 20:16:07.851756  120996 config.go:182] Loaded profile config "newest-cni-347168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:16:07.852180  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.852203  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.852214  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.852240  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.872723  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I1206 20:16:07.872740  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I1206 20:16:07.873224  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.873303  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.873760  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.873783  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.873988  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.874010  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.874258  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.874463  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:16:07.875233  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.875809  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.875837  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.878376  120996 addons.go:231] Setting addon default-storageclass=true in "newest-cni-347168"
	I1206 20:16:07.878424  120996 host.go:66] Checking if "newest-cni-347168" exists ...
	I1206 20:16:07.878853  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.878882  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.893412  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I1206 20:16:07.894052  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.894187  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1206 20:16:07.894691  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.894717  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.894789  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.895179  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.895362  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.895386  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.895394  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:16:07.895761  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.896546  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.896586  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.897295  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:16:07.899259  120996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:16:07.900607  120996 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:16:07.900663  120996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:16:07.900687  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:16:07.904194  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.904953  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:16:07.905057  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.905425  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:16:07.905674  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:16:07.905794  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:16:07.905889  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:16:07.919638  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I1206 20:16:07.920089  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.920600  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.920631  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.921042  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.921192  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:16:07.922940  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:16:07.923193  120996 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:16:07.923214  120996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:16:07.923235  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:16:07.926193  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.926682  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:16:07.926718  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.926927  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:16:07.927207  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:16:07.927412  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:16:07.927580  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:16:07.931087  120996 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-347168" context rescaled to 1 replicas
	I1206 20:16:07.931148  120996 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:16:07.932888  120996 out.go:177] * Verifying Kubernetes components...
	I1206 20:16:07.934253  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:16:08.053584  120996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:16:08.055008  120996 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:16:08.055052  120996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:16:08.127986  120996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:16:08.143775  120996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:16:08.771647  120996 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
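The long sed pipeline a few lines above edits the coredns ConfigMap in place so that host.minikube.internal resolves to the host IP. Reconstructed from that sed expression (not read back from the cluster), the injected Corefile fragment looks like:

    hosts {
       192.168.61.1 host.minikube.internal
       fallthrough
    }

It can be confirmed afterwards with "kubectl -n kube-system get configmap coredns -o yaml".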
	I1206 20:16:08.771747  120996 api_server.go:72] duration metric: took 840.566608ms to wait for apiserver process to appear ...
	I1206 20:16:08.771775  120996 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:16:08.771796  120996 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I1206 20:16:08.783873  120996 api_server.go:279] https://192.168.61.192:8443/healthz returned 200:
	ok
	I1206 20:16:08.790010  120996 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 20:16:08.790048  120996 api_server.go:131] duration metric: took 18.264411ms to wait for apiserver health ...
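The readiness check minikube performs here is a plain HTTPS GET against the apiserver's healthz endpoint. Done by hand it would look like the following (-k skips CA verification, purely for a quick manual check; the log above shows the expected 200 / "ok" response):

    curl -k https://192.168.61.192:8443/healthz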
	I1206 20:16:08.790060  120996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:16:08.800649  120996 system_pods.go:59] 7 kube-system pods found
	I1206 20:16:08.800688  120996 system_pods.go:61] "coredns-76f75df574-hxfmn" [10b8ef25-a5fc-46e6-9523-eecec91a2ee7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 20:16:08.800696  120996 system_pods.go:61] "coredns-76f75df574-klm8m" [78c66a8e-d0fa-4803-8dfa-738cb9a156c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 20:16:08.800702  120996 system_pods.go:61] "etcd-newest-cni-347168" [45388753-7f55-4b66-8f23-6534f2144977] Running
	I1206 20:16:08.800707  120996 system_pods.go:61] "kube-apiserver-newest-cni-347168" [44eda642-7ea5-487d-aa75-93c96613387c] Running
	I1206 20:16:08.800712  120996 system_pods.go:61] "kube-controller-manager-newest-cni-347168" [98a6990a-da64-405c-9fc6-2532e0c5a218] Running
	I1206 20:16:08.800718  120996 system_pods.go:61] "kube-proxy-mg5gl" [fb3398e0-2a88-4740-a4f2-38f748e01b34] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 20:16:08.800723  120996 system_pods.go:61] "kube-scheduler-newest-cni-347168" [dc9309ae-8fae-4d6a-9052-d36fd148f9db] Running
	I1206 20:16:08.800731  120996 system_pods.go:74] duration metric: took 10.66428ms to wait for pod list to return data ...
	I1206 20:16:08.800739  120996 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:16:08.803688  120996 default_sa.go:45] found service account: "default"
	I1206 20:16:08.803710  120996 default_sa.go:55] duration metric: took 2.965556ms for default service account to be created ...
	I1206 20:16:08.803719  120996 kubeadm.go:581] duration metric: took 872.545849ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1206 20:16:08.803737  120996 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:16:08.806698  120996 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:16:08.806726  120996 node_conditions.go:123] node cpu capacity is 2
	I1206 20:16:08.806737  120996 node_conditions.go:105] duration metric: took 2.995555ms to run NodePressure ...
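The capacity figures logged here (ephemeral storage and CPU) are read from the node's status. The same numbers can be queried directly with a jsonpath expression such as (profile name from this run):

    kubectl get node newest-cni-347168 -o jsonpath='{.status.capacity}'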
	I1206 20:16:08.806748  120996 start.go:228] waiting for startup goroutines ...
	I1206 20:16:08.987941  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.987971  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.987976  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.987996  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.988280  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988315  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:08.988334  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.988343  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.988384  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Closing plugin on server side
	I1206 20:16:08.988443  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988458  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:08.988479  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.988494  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.988585  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988587  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Closing plugin on server side
	I1206 20:16:08.988598  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:08.988774  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Closing plugin on server side
	I1206 20:16:08.988800  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988810  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:09.013122  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:09.013150  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:09.013473  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:09.013497  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:09.016691  120996 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1206 20:16:09.018570  120996 addons.go:502] enable addons completed in 1.167019478s: enabled=[storage-provisioner default-storageclass]
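Only storage-provisioner and default-storageclass were requested for this profile (every other entry in the toEnable map above is false). The enabled set can be double-checked with:

    minikube addons list -p newest-cni-347168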
	I1206 20:16:09.018619  120996 start.go:233] waiting for cluster config update ...
	I1206 20:16:09.018674  120996 start.go:242] writing updated cluster config ...
	I1206 20:16:09.018992  120996 ssh_runner.go:195] Run: rm -f paused
	I1206 20:16:09.086122  120996 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1206 20:16:09.088334  120996 out.go:177] * Done! kubectl is now configured to use "newest-cni-347168" cluster and "default" namespace by default
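The start finishes with a minor version skew note: the client kubectl is 1.28.4 while the cluster runs 1.29.0-rc.1, which is within kubectl's supported one-minor-version skew. The two versions can be compared at any time with:

    kubectl version -o yaml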
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:55:16 UTC, ends at Wed 2023-12-06 20:17:00 UTC. --
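Everything below is the CRI-O journal collected from the default-k8s-diff-port-380424 node as part of the failure artifacts. On a live profile, the same output can be pulled with something along these lines (a sketch, assuming the profile is still running):

    minikube ssh -p default-k8s-diff-port-380424 "sudo journalctl -u crio --no-pager"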
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.705655362Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dd52389c740ca47db469afacc818396fa694ea83fdbe2be68cdb935608a151c0,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-xpbtp,Uid:280fb2bc-d8d8-4684-8be1-ec0ace47ef77,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892847483611037,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-xpbtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 280fb2bc-d8d8-4684-8be1-ec0ace47ef77,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T20:00:47.140812424Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e1def8b1-c6bb-48df-b2f2-3486
7a409cb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892847293416866,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-06T20:00:46.955230345Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-x6p7t,Uid:de75d299-fede-4fe1-a748-31720acc76eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892846427092029,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T20:00:45.765622014Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&PodSandboxMetadata{Name:kube-proxy-khh5n,Uid:acac843d-9849-4b
da-af66-2422b319665e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892845822207420,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T20:00:43.964222623Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-380424,Uid:8b3422bb291fb3c207445e0bd656b0c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892821822836137,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8b3422bb291fb3c207445e0bd656b0c3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8b3422bb291fb3c207445e0bd656b0c3,kubernetes.io/config.seen: 2023-12-06T20:00:21.306649263Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-380424,Uid:3650c54206015f5f73ea260c72d54d27,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892821815863281,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3650c54206015f5f73ea260c72d54d27,kubernetes.io/config.seen: 2023-12-06T20:00:21.306648463Z,kubernetes.io/config.source: file,},
RuntimeHandler:,},&PodSandbox{Id:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-380424,Uid:6e14bbf982dabaf9ba842eeced09bf9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892821802588654,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.22:8444,kubernetes.io/config.hash: 6e14bbf982dabaf9ba842eeced09bf9f,kubernetes.io/config.seen: 2023-12-06T20:00:21.306647294Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-380424,Uid:b4f020be2b72e6574
d4b4b145d3c3d20,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892821761997942,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574d4b4b145d3c3d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.22:2379,kubernetes.io/config.hash: b4f020be2b72e6574d4b4b145d3c3d20,kubernetes.io/config.seen: 2023-12-06T20:00:21.306643117Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=4b3c6a46-a513-4495-a135-ad2885e4e3ec name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.706972648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0437b263-a086-4bf1-8f27-22bc5a2306b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.707053712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0437b263-a086-4bf1-8f27-22bc5a2306b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.707285197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0437b263-a086-4bf1-8f27-22bc5a2306b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.724146841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e246f9fb-0675-4fb3-af1d-c013cb54ba52 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.724235484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e246f9fb-0675-4fb3-af1d-c013cb54ba52 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.725409357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=919d681a-e041-48a3-acf1-c4f1583d6b59 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.725974744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893820725960982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=919d681a-e041-48a3-acf1-c4f1583d6b59 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.726393826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8b35a310-7cf0-42f3-b777-a85672ee9c4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.726507939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8b35a310-7cf0-42f3-b777-a85672ee9c4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.726703924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8b35a310-7cf0-42f3-b777-a85672ee9c4a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.768846646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3138c906-94d3-42a3-950d-4b8829a20cf7 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.768934552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3138c906-94d3-42a3-950d-4b8829a20cf7 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.770276022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0d1b6623-aeed-494f-8171-145dd52f7480 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.770767587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893820770752534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0d1b6623-aeed-494f-8171-145dd52f7480 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.771706543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=272c269c-910e-46cd-991f-d4f7fb90cb51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.771782171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=272c269c-910e-46cd-991f-d4f7fb90cb51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.771970821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=272c269c-910e-46cd-991f-d4f7fb90cb51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.813365468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4e173897-de82-44ed-aa1d-e9125d7628b0 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.813503338Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4e173897-de82-44ed-aa1d-e9125d7628b0 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.814932985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b1b243f4-7d60-456e-b0e7-c245c84efe62 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.815370011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893820815356231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b1b243f4-7d60-456e-b0e7-c245c84efe62 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.816220787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a8c7a6b-f3d3-4c5d-8a2b-ea8490a9e10a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.816295895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a8c7a6b-f3d3-4c5d-8a2b-ea8490a9e10a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:17:00 default-k8s-diff-port-380424 crio[725]: time="2023-12-06 20:17:00.816531116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260,PodSandboxId:6c07cabd56c24c42465e45099899d24b36090c98f56a975138ad497c56a513e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892848807274583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1def8b1-c6bb-48df-b2f2-34867a409cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 11efe436,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0,PodSandboxId:7de0529ee18ead08da0f8418c465ad47a21bc3777030b903c0847bb4096b04c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701892848479892462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-khh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac843d-9849-4bda-af66-2422b319665e,},Annotations:map[string]string{io.kubernetes.container.hash: 65741ac7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e,PodSandboxId:64620387bce08d831b42963f73dc797420c7eae9e8ef8b80bb047c163b1c855e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892847807068316,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-x6p7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de75d299-fede-4fe1-a748-31720acc76eb,},Annotations:map[string]string{io.kubernetes.container.hash: b38db4a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94,PodSandboxId:73ac3548d3b18a7d2de12f10c3fe5f31dc0728cab68014566bcc0aa6fba7c2b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892822761370901,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f020be2b72e6574
d4b4b145d3c3d20,},Annotations:map[string]string{io.kubernetes.container.hash: 9e075002,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe,PodSandboxId:c1741aadbbce663c805c78d510a6fb88f97754a4368a621f144ef23a1cec3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701892822676317629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3422bb291fb3c20
7445e0bd656b0c3,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f,PodSandboxId:b171f1df8871ec4eda57cf566603b0316772b0b5bd70edfc1f1b4edf157bb146,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892822589934806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3650c54206015f5f73ea260c72d54d27,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1,PodSandboxId:3309269f7ecf4bb8053c0e9db0065dceb4f52a49a2f3bceb720a9146be09149d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892822375986892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-380424,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6e14bbf982dabaf9ba842eeced09bf9f,},Annotations:map[string]string{io.kubernetes.container.hash: a27e8ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a8c7a6b-f3d3-4c5d-8a2b-ea8490a9e10a name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c9aadff3bd822       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   6c07cabd56c24       storage-provisioner
	cdab86736d83b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 minutes ago      Running             kube-proxy                0                   7de0529ee18ea       kube-proxy-khh5n
	32578a0cf908f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   64620387bce08       coredns-5dd5756b68-x6p7t
	ae6ebd5fabd5a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   73ac3548d3b18       etcd-default-k8s-diff-port-380424
	23de0ede546b1       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   c1741aadbbce6       kube-scheduler-default-k8s-diff-port-380424
	45732ee62285b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   b171f1df8871e       kube-controller-manager-default-k8s-diff-port-380424
	f1559f7cdd0f7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   3309269f7ecf4       kube-apiserver-default-k8s-diff-port-380424
	
	* 
	* ==> coredns [32578a0cf908fc0cb5caaac759149a35b7020bcc4fd563cc8be8358bbe3c5d4e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-380424
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-380424
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=default-k8s-diff-port-380424
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T20_00_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 20:00:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-380424
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 20:17:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:16:10 +0000   Wed, 06 Dec 2023 20:00:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:16:10 +0000   Wed, 06 Dec 2023 20:00:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:16:10 +0000   Wed, 06 Dec 2023 20:00:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:16:10 +0000   Wed, 06 Dec 2023 20:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.22
	  Hostname:    default-k8s-diff-port-380424
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8a1bdeb7e4d419e931c84253ccf1761
	  System UUID:                f8a1bdeb-7e4d-419e-931c-84253ccf1761
	  Boot ID:                    398861ae-9d73-4692-a98d-772a0cb22307
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-x6p7t                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-380424                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-380424             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-380424    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-khh5n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-380424             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-xpbtp                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-380424 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-380424 event: Registered Node default-k8s-diff-port-380424 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067869] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.515663] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.529510] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145082] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.495817] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.068363] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.127794] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.157712] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.102672] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.248073] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[ +17.797470] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Dec 6 19:56] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 6 20:00] systemd-fstab-generator[3533]: Ignoring "noauto" for root device
	[ +10.287204] systemd-fstab-generator[3865]: Ignoring "noauto" for root device
	[ +15.983459] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [ae6ebd5fabd5ae8a42c7e81c50097899af6c1e0c0d32038ed24223f5dfd13f94] <==
	* {"level":"info","ts":"2023-12-06T20:00:24.539562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:24.539692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:24.539739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 received MsgPreVoteResp from 80caca8c0a5d0f21 at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:24.539778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 became candidate at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.539813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 received MsgVoteResp from 80caca8c0a5d0f21 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.539849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80caca8c0a5d0f21 became leader at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.539882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 80caca8c0a5d0f21 elected leader 80caca8c0a5d0f21 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:24.543805Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"80caca8c0a5d0f21","local-member-attributes":"{Name:default-k8s-diff-port-380424 ClientURLs:[https://192.168.72.22:2379]}","request-path":"/0/members/80caca8c0a5d0f21/attributes","cluster-id":"ceec70a6b9eea11d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T20:00:24.543901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:24.545503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.22:2379"}
	{"level":"info","ts":"2023-12-06T20:00:24.545547Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:24.550733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T20:00:24.550802Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T20:00:24.546222Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:24.558006Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T20:00:24.588605Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ceec70a6b9eea11d","local-member-id":"80caca8c0a5d0f21","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:24.588876Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:24.588985Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:10:24.901527Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2023-12-06T20:10:24.904882Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.858428ms","hash":2891785748}
	{"level":"info","ts":"2023-12-06T20:10:24.904961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2891785748,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2023-12-06T20:15:24.911754Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2023-12-06T20:15:24.914694Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":957,"took":"2.020322ms","hash":2546524456}
	{"level":"info","ts":"2023-12-06T20:15:24.914805Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2546524456,"revision":957,"compact-revision":714}
	{"level":"info","ts":"2023-12-06T20:15:42.745219Z","caller":"traceutil/trace.go:171","msg":"trace[65600514] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"133.882126ms","start":"2023-12-06T20:15:42.611279Z","end":"2023-12-06T20:15:42.745162Z","steps":["trace[65600514] 'process raft request'  (duration: 133.756825ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:17:01 up 21 min,  0 users,  load average: 0.05, 0.23, 0.23
	Linux default-k8s-diff-port-380424 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f1559f7cdd0f70169ed3fd8c988f56860f427f6ecfeb7975274ee4bc105624b1] <==
	* W1206 20:13:27.875788       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:13:27.875913       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:13:27.875933       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:14:26.746020       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1206 20:15:26.746281       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:15:26.876794       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:15:26.877009       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:15:26.877849       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:15:27.877603       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:15:27.877700       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:15:27.877728       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:15:27.877796       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:15:27.877870       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:15:27.879072       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:16:26.746600       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:16:27.878179       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:16:27.878321       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:16:27.878356       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:16:27.879316       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:16:27.879421       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:16:27.879556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [45732ee62285b86989592cc56e3154151c04101ed8fe9b617ec01b515d05332f] <==
	* I1206 20:11:14.533770       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:11:44.058198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:11:44.542937       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:12:09.129616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="348.001µs"
	E1206 20:12:14.065123       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:12:14.553402       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:12:24.116204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="173.144µs"
	E1206 20:12:44.071659       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:12:44.564513       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:13:14.077653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:13:14.573578       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:13:44.085112       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:13:44.582561       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:14:14.090957       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:14:14.592555       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:14:44.105984       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:14:44.602908       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:15:14.112998       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:15:14.616876       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:15:44.119908       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:15:44.626957       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:16:14.125703       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:16:14.636064       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:16:44.132675       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:16:44.645260       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [cdab86736d83b0ca2134e4add0ac6f9c66685fe48dfb3b07b5c77fed2f1448b0] <==
	* I1206 20:00:48.926614       1 server_others.go:69] "Using iptables proxy"
	I1206 20:00:48.963875       1 node.go:141] Successfully retrieved node IP: 192.168.72.22
	I1206 20:00:49.068654       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1206 20:00:49.068731       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 20:00:49.072621       1 server_others.go:152] "Using iptables Proxier"
	I1206 20:00:49.073603       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 20:00:49.073841       1 server.go:846] "Version info" version="v1.28.4"
	I1206 20:00:49.074035       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 20:00:49.080040       1 config.go:188] "Starting service config controller"
	I1206 20:00:49.080929       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 20:00:49.080985       1 config.go:97] "Starting endpoint slice config controller"
	I1206 20:00:49.081007       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 20:00:49.086669       1 config.go:315] "Starting node config controller"
	I1206 20:00:49.086720       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 20:00:49.181945       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 20:00:49.182016       1 shared_informer.go:318] Caches are synced for service config
	I1206 20:00:49.187014       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [23de0ede546b1ebf1e05556778d0bd15c476ba99f41924c568b5d9b445b97ffe] <==
	* W1206 20:00:27.916562       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 20:00:27.916651       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 20:00:28.022138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 20:00:28.022247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1206 20:00:28.049602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:28.049717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 20:00:28.050362       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.050501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:28.061305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 20:00:28.061408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 20:00:28.140170       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.140285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:28.197958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:28.198149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1206 20:00:28.322742       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:28.322812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 20:00:28.383862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.383927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:28.484929       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 20:00:28.484966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1206 20:00:28.487018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:28.487177       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1206 20:00:28.533069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:28.533233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1206 20:00:30.990651       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:55:16 UTC, ends at Wed 2023-12-06 20:17:01 UTC. --
	Dec 06 20:14:19 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:14:19.099044    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:14:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:14:31.215892    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:14:31 default-k8s-diff-port-380424 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:14:31 default-k8s-diff-port-380424 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:14:31 default-k8s-diff-port-380424 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:14:34 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:14:34.098900    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:14:49 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:14:49.099768    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:15:04 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:15:04.099625    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:15:18 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:15:18.099314    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:15:30 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:15:30.099131    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:15:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:15:31.173319    3872 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 06 20:15:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:15:31.218122    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:15:31 default-k8s-diff-port-380424 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:15:31 default-k8s-diff-port-380424 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:15:31 default-k8s-diff-port-380424 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:15:43 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:15:43.098813    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:15:55 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:15:55.100000    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:16:10 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:16:10.099316    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:16:21 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:16:21.098902    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:16:31 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:16:31.215483    3872 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:16:31 default-k8s-diff-port-380424 kubelet[3872]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:16:31 default-k8s-diff-port-380424 kubelet[3872]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:16:31 default-k8s-diff-port-380424 kubelet[3872]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:16:36 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:16:36.098822    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	Dec 06 20:16:51 default-k8s-diff-port-380424 kubelet[3872]: E1206 20:16:51.098706    3872 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xpbtp" podUID="280fb2bc-d8d8-4684-8be1-ec0ace47ef77"
	
	* 
	* ==> storage-provisioner [c9aadff3bd822709562dbf1a0ded031ba2c2ea54884c53d782071174d0738260] <==
	* I1206 20:00:48.986771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 20:00:49.005108       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 20:00:49.005413       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 20:00:49.022750       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 20:00:49.023096       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-380424_5d9f0bc5-eca4-46a5-be9a-f93670efd2e9!
	I1206 20:00:49.026161       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b9a0450-fc18-4e96-8af1-f60dc2ead67b", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-380424_5d9f0bc5-eca4-46a5-be9a-f93670efd2e9 became leader
	I1206 20:00:49.123761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-380424_5d9f0bc5-eca4-46a5-be9a-f93670efd2e9!
	

                                                
                                                
-- /stdout --
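Note on the repeated kubelet "Could not set up iptables canary" entries in the log above: they mean the guest kernel has no ip6tables nat table loaded, which is independent of the metrics-server image-pull failure the test reports. A minimal manual check, assuming the default-k8s-diff-port-380424 profile is still running, could be:

	out/minikube-linux-amd64 -p default-k8s-diff-port-380424 ssh "lsmod | grep ip6table_nat || echo module not loaded"
	out/minikube-linux-amd64 -p default-k8s-diff-port-380424 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L"

Neither command is part of the test run; they only sketch how the canary error could be reproduced by hand.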
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xpbtp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 describe pod metrics-server-57f55c9bc5-xpbtp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-380424 describe pod metrics-server-57f55c9bc5-xpbtp: exit status 1 (65.90222ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xpbtp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-380424 describe pod metrics-server-57f55c9bc5-xpbtp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (427.51s)
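The ImagePullBackOff loop for metrics-server-57f55c9bc5-xpbtp in the kubelet log above follows from the addon override visible in the Audit table below (--registries=MetricsServer=fake.domain): fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled. A quick way to confirm what image the addon deployment points at, assuming the addon kept the standard metrics-server deployment name in kube-system, would be:

	kubectl --context default-k8s-diff-port-380424 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

This is only an illustrative check, not something the test harness runs.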

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (365.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-209025 -n embed-certs-209025
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:16:18.517431453 +0000 UTC m=+5752.601925281
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-209025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-209025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.08µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-209025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
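The assertion at start_stop_delete_test.go:297 expects the scraper image to contain registry.k8s.io/echoserver:1.4 because the dashboard addon was enabled with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table below); the deployment info is empty here only because the test context had already expired when describe ran. As a sketch, the same information could be gathered by hand with a fresh deadline:

	kubectl --context embed-certs-209025 -n kubernetes-dashboard get deploy -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'
	kubectl --context embed-certs-209025 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

These commands are not run by the test; they only show how the missing deployment info could be collected manually.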
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-209025 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-209025 logs -n 25: (1.270641679s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-459609 sudo crio                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC | 06 Dec 23 20:15 UTC |
	| start   | -p newest-cni-347168 --memory=2200 --alsologtostderr   | newest-cni-347168            | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC | 06 Dec 23 20:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC | 06 Dec 23 20:15 UTC |
	| addons  | enable metrics-server -p newest-cni-347168             | newest-cni-347168            | jenkins | v1.32.0 | 06 Dec 23 20:16 UTC | 06 Dec 23 20:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-347168                                   | newest-cni-347168            | jenkins | v1.32.0 | 06 Dec 23 20:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 20:15:09
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 20:15:09.805224  120996 out.go:296] Setting OutFile to fd 1 ...
	I1206 20:15:09.805509  120996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:15:09.805520  120996 out.go:309] Setting ErrFile to fd 2...
	I1206 20:15:09.805524  120996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:15:09.805720  120996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 20:15:09.806348  120996 out.go:303] Setting JSON to false
	I1206 20:15:09.807270  120996 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10660,"bootTime":1701883050,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 20:15:09.807333  120996 start.go:138] virtualization: kvm guest
	I1206 20:15:09.809854  120996 out.go:177] * [newest-cni-347168] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 20:15:09.811393  120996 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 20:15:09.812932  120996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 20:15:09.811424  120996 notify.go:220] Checking for updates...
	I1206 20:15:09.815815  120996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:15:09.817403  120996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:09.818874  120996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 20:15:09.820369  120996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 20:15:09.822395  120996 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:15:09.822498  120996 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:15:09.822603  120996 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:15:09.822725  120996 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 20:15:09.861615  120996 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 20:15:09.863332  120996 start.go:298] selected driver: kvm2
	I1206 20:15:09.863353  120996 start.go:902] validating driver "kvm2" against <nil>
	I1206 20:15:09.863380  120996 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 20:15:09.864102  120996 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 20:15:09.864195  120996 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 20:15:09.879735  120996 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 20:15:09.879783  120996 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1206 20:15:09.879805  120996 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 20:15:09.880097  120996 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 20:15:09.880183  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:09.880204  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:09.880226  120996 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 20:15:09.880242  120996 start_flags.go:323] config:
	{Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 20:15:09.880418  120996 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 20:15:09.882883  120996 out.go:177] * Starting control plane node newest-cni-347168 in cluster newest-cni-347168
	I1206 20:15:09.884341  120996 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 20:15:09.884386  120996 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1206 20:15:09.884401  120996 cache.go:56] Caching tarball of preloaded images
	I1206 20:15:09.884535  120996 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 20:15:09.884549  120996 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1206 20:15:09.884667  120996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json ...
	I1206 20:15:09.884703  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json: {Name:mkc51a1c7ccc2567aa83707a3b832218332d0cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:09.884894  120996 start.go:365] acquiring machines lock for newest-cni-347168: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 20:15:09.884933  120996 start.go:369] acquired machines lock for "newest-cni-347168" in 22.74µs
	I1206 20:15:09.884956  120996 start.go:93] Provisioning new machine with config: &{Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:15:09.885048  120996 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 20:15:09.886939  120996 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1206 20:15:09.887110  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:15:09.887163  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:15:09.902685  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I1206 20:15:09.903118  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:15:09.903749  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:15:09.903771  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:15:09.904154  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:15:09.904366  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:09.904499  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:09.904692  120996 start.go:159] libmachine.API.Create for "newest-cni-347168" (driver="kvm2")
	I1206 20:15:09.904762  120996 client.go:168] LocalClient.Create starting
	I1206 20:15:09.904828  120996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem
	I1206 20:15:09.904862  120996 main.go:141] libmachine: Decoding PEM data...
	I1206 20:15:09.904880  120996 main.go:141] libmachine: Parsing certificate...
	I1206 20:15:09.904944  120996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem
	I1206 20:15:09.904961  120996 main.go:141] libmachine: Decoding PEM data...
	I1206 20:15:09.904976  120996 main.go:141] libmachine: Parsing certificate...
	I1206 20:15:09.904993  120996 main.go:141] libmachine: Running pre-create checks...
	I1206 20:15:09.905007  120996 main.go:141] libmachine: (newest-cni-347168) Calling .PreCreateCheck
	I1206 20:15:09.905441  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:09.905904  120996 main.go:141] libmachine: Creating machine...
	I1206 20:15:09.905926  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Create
	I1206 20:15:09.906160  120996 main.go:141] libmachine: (newest-cni-347168) Creating KVM machine...
	I1206 20:15:09.907558  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found existing default KVM network
	I1206 20:15:09.908771  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.908571  121019 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:15:65} reservation:<nil>}
	I1206 20:15:09.909652  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.909565  121019 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:51:aa} reservation:<nil>}
	I1206 20:15:09.910815  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.910704  121019 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001fcfb0}
	I1206 20:15:09.916826  120996 main.go:141] libmachine: (newest-cni-347168) DBG | trying to create private KVM network mk-newest-cni-347168 192.168.61.0/24...
	I1206 20:15:10.001011  120996 main.go:141] libmachine: (newest-cni-347168) DBG | private KVM network mk-newest-cni-347168 192.168.61.0/24 created
	I1206 20:15:10.001053  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.000937  121019 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:10.001072  120996 main.go:141] libmachine: (newest-cni-347168) Setting up store path in /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 ...
	I1206 20:15:10.001125  120996 main.go:141] libmachine: (newest-cni-347168) Building disk image from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 20:15:10.001177  120996 main.go:141] libmachine: (newest-cni-347168) Downloading /home/jenkins/minikube-integration/17740-63652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1206 20:15:10.243016  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.242863  121019 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa...
	I1206 20:15:10.293758  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.293630  121019 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/newest-cni-347168.rawdisk...
	I1206 20:15:10.293791  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Writing magic tar header
	I1206 20:15:10.293805  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Writing SSH key tar header
	I1206 20:15:10.293814  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.293781  121019 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 ...
	I1206 20:15:10.293940  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168
	I1206 20:15:10.293981  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines
	I1206 20:15:10.293999  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 (perms=drwx------)
	I1206 20:15:10.294014  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:10.294031  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652
	I1206 20:15:10.294057  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1206 20:15:10.294074  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins
	I1206 20:15:10.294090  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines (perms=drwxr-xr-x)
	I1206 20:15:10.294110  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube (perms=drwxr-xr-x)
	I1206 20:15:10.294124  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652 (perms=drwxrwxr-x)
	I1206 20:15:10.294139  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 20:15:10.294151  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 20:15:10.294165  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home
	I1206 20:15:10.294177  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Skipping /home - not owner
	I1206 20:15:10.294190  120996 main.go:141] libmachine: (newest-cni-347168) Creating domain...
	I1206 20:15:10.295484  120996 main.go:141] libmachine: (newest-cni-347168) define libvirt domain using xml: 
	I1206 20:15:10.295514  120996 main.go:141] libmachine: (newest-cni-347168) <domain type='kvm'>
	I1206 20:15:10.295523  120996 main.go:141] libmachine: (newest-cni-347168)   <name>newest-cni-347168</name>
	I1206 20:15:10.295529  120996 main.go:141] libmachine: (newest-cni-347168)   <memory unit='MiB'>2200</memory>
	I1206 20:15:10.295535  120996 main.go:141] libmachine: (newest-cni-347168)   <vcpu>2</vcpu>
	I1206 20:15:10.295540  120996 main.go:141] libmachine: (newest-cni-347168)   <features>
	I1206 20:15:10.295546  120996 main.go:141] libmachine: (newest-cni-347168)     <acpi/>
	I1206 20:15:10.295559  120996 main.go:141] libmachine: (newest-cni-347168)     <apic/>
	I1206 20:15:10.295581  120996 main.go:141] libmachine: (newest-cni-347168)     <pae/>
	I1206 20:15:10.295594  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.295603  120996 main.go:141] libmachine: (newest-cni-347168)   </features>
	I1206 20:15:10.295610  120996 main.go:141] libmachine: (newest-cni-347168)   <cpu mode='host-passthrough'>
	I1206 20:15:10.295624  120996 main.go:141] libmachine: (newest-cni-347168)   
	I1206 20:15:10.295634  120996 main.go:141] libmachine: (newest-cni-347168)   </cpu>
	I1206 20:15:10.295666  120996 main.go:141] libmachine: (newest-cni-347168)   <os>
	I1206 20:15:10.295693  120996 main.go:141] libmachine: (newest-cni-347168)     <type>hvm</type>
	I1206 20:15:10.295705  120996 main.go:141] libmachine: (newest-cni-347168)     <boot dev='cdrom'/>
	I1206 20:15:10.295748  120996 main.go:141] libmachine: (newest-cni-347168)     <boot dev='hd'/>
	I1206 20:15:10.295764  120996 main.go:141] libmachine: (newest-cni-347168)     <bootmenu enable='no'/>
	I1206 20:15:10.295788  120996 main.go:141] libmachine: (newest-cni-347168)   </os>
	I1206 20:15:10.295801  120996 main.go:141] libmachine: (newest-cni-347168)   <devices>
	I1206 20:15:10.295815  120996 main.go:141] libmachine: (newest-cni-347168)     <disk type='file' device='cdrom'>
	I1206 20:15:10.295837  120996 main.go:141] libmachine: (newest-cni-347168)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/boot2docker.iso'/>
	I1206 20:15:10.295848  120996 main.go:141] libmachine: (newest-cni-347168)       <target dev='hdc' bus='scsi'/>
	I1206 20:15:10.295861  120996 main.go:141] libmachine: (newest-cni-347168)       <readonly/>
	I1206 20:15:10.295872  120996 main.go:141] libmachine: (newest-cni-347168)     </disk>
	I1206 20:15:10.295886  120996 main.go:141] libmachine: (newest-cni-347168)     <disk type='file' device='disk'>
	I1206 20:15:10.295904  120996 main.go:141] libmachine: (newest-cni-347168)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1206 20:15:10.295923  120996 main.go:141] libmachine: (newest-cni-347168)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/newest-cni-347168.rawdisk'/>
	I1206 20:15:10.295936  120996 main.go:141] libmachine: (newest-cni-347168)       <target dev='hda' bus='virtio'/>
	I1206 20:15:10.295949  120996 main.go:141] libmachine: (newest-cni-347168)     </disk>
	I1206 20:15:10.295958  120996 main.go:141] libmachine: (newest-cni-347168)     <interface type='network'>
	I1206 20:15:10.295982  120996 main.go:141] libmachine: (newest-cni-347168)       <source network='mk-newest-cni-347168'/>
	I1206 20:15:10.295999  120996 main.go:141] libmachine: (newest-cni-347168)       <model type='virtio'/>
	I1206 20:15:10.296069  120996 main.go:141] libmachine: (newest-cni-347168)     </interface>
	I1206 20:15:10.296096  120996 main.go:141] libmachine: (newest-cni-347168)     <interface type='network'>
	I1206 20:15:10.296114  120996 main.go:141] libmachine: (newest-cni-347168)       <source network='default'/>
	I1206 20:15:10.296123  120996 main.go:141] libmachine: (newest-cni-347168)       <model type='virtio'/>
	I1206 20:15:10.296133  120996 main.go:141] libmachine: (newest-cni-347168)     </interface>
	I1206 20:15:10.296142  120996 main.go:141] libmachine: (newest-cni-347168)     <serial type='pty'>
	I1206 20:15:10.296151  120996 main.go:141] libmachine: (newest-cni-347168)       <target port='0'/>
	I1206 20:15:10.296158  120996 main.go:141] libmachine: (newest-cni-347168)     </serial>
	I1206 20:15:10.296167  120996 main.go:141] libmachine: (newest-cni-347168)     <console type='pty'>
	I1206 20:15:10.296175  120996 main.go:141] libmachine: (newest-cni-347168)       <target type='serial' port='0'/>
	I1206 20:15:10.296184  120996 main.go:141] libmachine: (newest-cni-347168)     </console>
	I1206 20:15:10.296192  120996 main.go:141] libmachine: (newest-cni-347168)     <rng model='virtio'>
	I1206 20:15:10.296204  120996 main.go:141] libmachine: (newest-cni-347168)       <backend model='random'>/dev/random</backend>
	I1206 20:15:10.296211  120996 main.go:141] libmachine: (newest-cni-347168)     </rng>
	I1206 20:15:10.296220  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.296234  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.296244  120996 main.go:141] libmachine: (newest-cni-347168)   </devices>
	I1206 20:15:10.296252  120996 main.go:141] libmachine: (newest-cni-347168) </domain>
	I1206 20:15:10.296280  120996 main.go:141] libmachine: (newest-cni-347168) 
	I1206 20:15:10.300528  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:7f:92:13 in network default
	I1206 20:15:10.301121  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring networks are active...
	I1206 20:15:10.301154  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:10.301898  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring network default is active
	I1206 20:15:10.302202  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring network mk-newest-cni-347168 is active
	I1206 20:15:10.302641  120996 main.go:141] libmachine: (newest-cni-347168) Getting domain xml...
	I1206 20:15:10.303450  120996 main.go:141] libmachine: (newest-cni-347168) Creating domain...
	I1206 20:15:11.631063  120996 main.go:141] libmachine: (newest-cni-347168) Waiting to get IP...
	I1206 20:15:11.631867  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:11.632488  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:11.632520  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:11.632443  121019 retry.go:31] will retry after 233.957525ms: waiting for machine to come up
	I1206 20:15:11.867869  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:11.868462  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:11.868491  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:11.868395  121019 retry.go:31] will retry after 255.274669ms: waiting for machine to come up
	I1206 20:15:12.124876  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.125472  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.125503  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.125411  121019 retry.go:31] will retry after 349.317013ms: waiting for machine to come up
	I1206 20:15:12.475860  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.476566  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.476599  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.476497  121019 retry.go:31] will retry after 416.403168ms: waiting for machine to come up
	I1206 20:15:12.894125  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.894686  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.894709  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.894603  121019 retry.go:31] will retry after 608.573742ms: waiting for machine to come up
	I1206 20:15:13.504176  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:13.504628  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:13.504660  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:13.504560  121019 retry.go:31] will retry after 646.189699ms: waiting for machine to come up
	I1206 20:15:14.152435  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:14.152802  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:14.152825  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:14.152756  121019 retry.go:31] will retry after 961.404409ms: waiting for machine to come up
	I1206 20:15:15.115574  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:15.116051  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:15.116073  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:15.115993  121019 retry.go:31] will retry after 1.329333828s: waiting for machine to come up
	I1206 20:15:16.447315  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:16.447883  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:16.447925  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:16.447841  121019 retry.go:31] will retry after 1.448183792s: waiting for machine to come up
	I1206 20:15:17.898296  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:17.898794  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:17.898835  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:17.898770  121019 retry.go:31] will retry after 1.963121871s: waiting for machine to come up
	I1206 20:15:19.863330  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:19.863874  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:19.863907  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:19.863824  121019 retry.go:31] will retry after 1.863190443s: waiting for machine to come up
	I1206 20:15:21.729550  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:21.730063  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:21.730098  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:21.730003  121019 retry.go:31] will retry after 3.534433438s: waiting for machine to come up
	I1206 20:15:25.266286  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:25.266770  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:25.266793  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:25.266731  121019 retry.go:31] will retry after 3.268833182s: waiting for machine to come up
	I1206 20:15:28.538314  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:28.538836  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:28.538866  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:28.538774  121019 retry.go:31] will retry after 4.552063341s: waiting for machine to come up
	I1206 20:15:33.094236  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.094859  120996 main.go:141] libmachine: (newest-cni-347168) Found IP for machine: 192.168.61.192
	I1206 20:15:33.094891  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has current primary IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.094903  120996 main.go:141] libmachine: (newest-cni-347168) Reserving static IP address...
	I1206 20:15:33.095318  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find host DHCP lease matching {name: "newest-cni-347168", mac: "52:54:00:11:9b:a6", ip: "192.168.61.192"} in network mk-newest-cni-347168
	I1206 20:15:33.176566  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Getting to WaitForSSH function...
	I1206 20:15:33.176603  120996 main.go:141] libmachine: (newest-cni-347168) Reserved static IP address: 192.168.61.192
	I1206 20:15:33.176620  120996 main.go:141] libmachine: (newest-cni-347168) Waiting for SSH to be available...
	I1206 20:15:33.179571  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.180101  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.180146  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.180242  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using SSH client type: external
	I1206 20:15:33.180273  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa (-rw-------)
	I1206 20:15:33.180316  120996 main.go:141] libmachine: (newest-cni-347168) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 20:15:33.180335  120996 main.go:141] libmachine: (newest-cni-347168) DBG | About to run SSH command:
	I1206 20:15:33.180354  120996 main.go:141] libmachine: (newest-cni-347168) DBG | exit 0
	I1206 20:15:33.269146  120996 main.go:141] libmachine: (newest-cni-347168) DBG | SSH cmd err, output: <nil>: 
	I1206 20:15:33.269444  120996 main.go:141] libmachine: (newest-cni-347168) KVM machine creation complete!
	I1206 20:15:33.269829  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:33.270405  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:33.270633  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:33.270822  120996 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1206 20:15:33.270835  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:15:33.272293  120996 main.go:141] libmachine: Detecting operating system of created instance...
	I1206 20:15:33.272342  120996 main.go:141] libmachine: Waiting for SSH to be available...
	I1206 20:15:33.272355  120996 main.go:141] libmachine: Getting to WaitForSSH function...
	I1206 20:15:33.272365  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.275189  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.275639  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.275661  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.275861  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.276078  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.276274  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.276436  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.276619  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.277063  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.277084  120996 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1206 20:15:33.396625  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 20:15:33.396664  120996 main.go:141] libmachine: Detecting the provisioner...
	I1206 20:15:33.396673  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.399852  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.400190  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.400224  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.400361  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.400593  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.400784  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.400971  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.401166  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.401629  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.401646  120996 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1206 20:15:33.527309  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1206 20:15:33.527418  120996 main.go:141] libmachine: found compatible host: buildroot
	I1206 20:15:33.527427  120996 main.go:141] libmachine: Provisioning with buildroot...
	I1206 20:15:33.527434  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.527777  120996 buildroot.go:166] provisioning hostname "newest-cni-347168"
	I1206 20:15:33.527818  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.528027  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.530841  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.531228  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.531280  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.531377  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.531609  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.531813  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.532007  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.532266  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.532677  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.532700  120996 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-347168 && echo "newest-cni-347168" | sudo tee /etc/hostname
	I1206 20:15:33.662449  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-347168
	
	I1206 20:15:33.662483  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.665436  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.665800  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.665846  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.665981  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.666218  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.666403  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.666527  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.666696  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.667172  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.667192  120996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-347168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-347168/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-347168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 20:15:33.796492  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 20:15:33.796531  120996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 20:15:33.796567  120996 buildroot.go:174] setting up certificates
	I1206 20:15:33.796589  120996 provision.go:83] configureAuth start
	I1206 20:15:33.796604  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.796964  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:33.799993  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.800370  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.800403  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.800521  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.802989  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.803300  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.803341  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.803478  120996 provision.go:138] copyHostCerts
	I1206 20:15:33.803571  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 20:15:33.803603  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 20:15:33.803687  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 20:15:33.803858  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 20:15:33.803869  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 20:15:33.803910  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 20:15:33.804042  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 20:15:33.804091  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 20:15:33.804141  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 20:15:33.804214  120996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-347168 san=[192.168.61.192 192.168.61.192 localhost 127.0.0.1 minikube newest-cni-347168]
	I1206 20:15:33.994563  120996 provision.go:172] copyRemoteCerts
	I1206 20:15:33.994644  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 20:15:33.994682  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.997818  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.998118  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.998153  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.998411  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.998612  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.998774  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.998935  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.091615  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 20:15:34.118438  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 20:15:34.145084  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 20:15:34.170898  120996 provision.go:86] duration metric: configureAuth took 374.286079ms
	I1206 20:15:34.170929  120996 buildroot.go:189] setting minikube options for container-runtime
	I1206 20:15:34.171164  120996 config.go:182] Loaded profile config "newest-cni-347168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:15:34.171268  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.174189  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.174600  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.174628  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.174785  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.174985  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.175141  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.175338  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.175523  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:34.175843  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:34.175862  120996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 20:15:34.505869  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 20:15:34.505897  120996 main.go:141] libmachine: Checking connection to Docker...
	I1206 20:15:34.505925  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetURL
	I1206 20:15:34.507244  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using libvirt version 6000000
	I1206 20:15:34.509869  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.510193  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.510223  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.510381  120996 main.go:141] libmachine: Docker is up and running!
	I1206 20:15:34.510395  120996 main.go:141] libmachine: Reticulating splines...
	I1206 20:15:34.510402  120996 client.go:171] LocalClient.Create took 24.605627718s
	I1206 20:15:34.510422  120996 start.go:167] duration metric: libmachine.API.Create for "newest-cni-347168" took 24.605732185s
	I1206 20:15:34.510431  120996 start.go:300] post-start starting for "newest-cni-347168" (driver="kvm2")
	I1206 20:15:34.510441  120996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 20:15:34.510457  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.510730  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 20:15:34.510761  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.512910  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.513206  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.513248  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.513417  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.513618  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.513799  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.513964  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.602772  120996 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 20:15:34.607707  120996 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 20:15:34.607747  120996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 20:15:34.607827  120996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 20:15:34.607921  120996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 20:15:34.608034  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 20:15:34.617266  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 20:15:34.642598  120996 start.go:303] post-start completed in 132.153683ms
	I1206 20:15:34.642655  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:34.643248  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:34.645908  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.646216  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.646250  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.646495  120996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json ...
	I1206 20:15:34.646667  120996 start.go:128] duration metric: createHost completed in 24.7616076s
	I1206 20:15:34.646690  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.649005  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.649396  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.649427  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.649582  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.649793  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.649962  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.650115  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.650296  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:34.650651  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:34.650665  120996 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 20:15:34.770239  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701893734.748854790
	
	I1206 20:15:34.770269  120996 fix.go:206] guest clock: 1701893734.748854790
	I1206 20:15:34.770279  120996 fix.go:219] Guest: 2023-12-06 20:15:34.74885479 +0000 UTC Remote: 2023-12-06 20:15:34.646679476 +0000 UTC m=+24.893998228 (delta=102.175314ms)
	I1206 20:15:34.770307  120996 fix.go:190] guest clock delta is within tolerance: 102.175314ms
	I1206 20:15:34.770313  120996 start.go:83] releasing machines lock for "newest-cni-347168", held for 24.885371157s
	I1206 20:15:34.770338  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.770693  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:34.773617  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.774159  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.774191  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.774423  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775037  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775241  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775404  120996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 20:15:34.775472  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.775508  120996 ssh_runner.go:195] Run: cat /version.json
	I1206 20:15:34.775536  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.778593  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.778852  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779035  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.779083  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779187  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.779216  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779351  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.779479  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.779560  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.779632  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.779712  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.779772  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.779846  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.779906  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.863386  120996 ssh_runner.go:195] Run: systemctl --version
	I1206 20:15:34.895207  120996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 20:15:35.057492  120996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 20:15:35.064260  120996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 20:15:35.064332  120996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 20:15:35.080857  120996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 20:15:35.080883  120996 start.go:475] detecting cgroup driver to use...
	I1206 20:15:35.080977  120996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 20:15:35.094647  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 20:15:35.108721  120996 docker.go:203] disabling cri-docker service (if available) ...
	I1206 20:15:35.108805  120996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 20:15:35.122547  120996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 20:15:35.137628  120996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 20:15:35.249519  120996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 20:15:35.372591  120996 docker.go:219] disabling docker service ...
	I1206 20:15:35.372650  120996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 20:15:35.386595  120996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 20:15:35.399053  120996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 20:15:35.517013  120996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 20:15:35.630728  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 20:15:35.642975  120996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 20:15:35.661406  120996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 20:15:35.661494  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.670952  120996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 20:15:35.671028  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.680444  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.690123  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.699431  120996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 20:15:35.709773  120996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 20:15:35.718080  120996 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 20:15:35.718160  120996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 20:15:35.729953  120996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 20:15:35.739791  120996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 20:15:35.856949  120996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 20:15:36.044563  120996 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 20:15:36.044646  120996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 20:15:36.050663  120996 start.go:543] Will wait 60s for crictl version
	I1206 20:15:36.050727  120996 ssh_runner.go:195] Run: which crictl
	I1206 20:15:36.055266  120996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 20:15:36.095529  120996 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 20:15:36.095602  120996 ssh_runner.go:195] Run: crio --version
	I1206 20:15:36.141633  120996 ssh_runner.go:195] Run: crio --version
	I1206 20:15:36.192165  120996 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 20:15:36.193762  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:36.197069  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:36.197489  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:36.197518  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:36.197830  120996 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 20:15:36.202239  120996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 20:15:36.215884  120996 localpath.go:92] copying /home/jenkins/minikube-integration/17740-63652/.minikube/client.crt -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.crt
	I1206 20:15:36.216041  120996 localpath.go:117] copying /home/jenkins/minikube-integration/17740-63652/.minikube/client.key -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.key
	I1206 20:15:36.218392  120996 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 20:15:36.220048  120996 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 20:15:36.220120  120996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 20:15:36.262585  120996 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 20:15:36.262652  120996 ssh_runner.go:195] Run: which lz4
	I1206 20:15:36.267061  120996 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 20:15:36.271359  120996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 20:15:36.271388  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401677649 bytes)
	I1206 20:15:37.981124  120996 crio.go:444] Took 1.714117 seconds to copy over tarball
	I1206 20:15:37.981223  120996 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 20:15:40.790111  120996 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.808826705s)
	I1206 20:15:40.790157  120996 crio.go:451] Took 2.809002 seconds to extract the tarball
	I1206 20:15:40.790167  120996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 20:15:40.828966  120996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 20:15:40.916896  120996 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 20:15:40.916921  120996 cache_images.go:84] Images are preloaded, skipping loading
	I1206 20:15:40.916985  120996 ssh_runner.go:195] Run: crio config
	I1206 20:15:40.998264  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:40.998288  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:40.998307  120996 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1206 20:15:40.998328  120996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-347168 NodeName:newest-cni-347168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 20:15:40.998468  120996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-347168"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 20:15:40.998549  120996 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-347168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 20:15:40.998608  120996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 20:15:41.008416  120996 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 20:15:41.008501  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 20:15:41.017748  120996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1206 20:15:41.035185  120996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 20:15:41.052224  120996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1206 20:15:41.069299  120996 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I1206 20:15:41.073265  120996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 20:15:41.085857  120996 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168 for IP: 192.168.61.192
	I1206 20:15:41.085896  120996 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.086087  120996 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 20:15:41.086151  120996 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 20:15:41.086325  120996 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.key
	I1206 20:15:41.086357  120996 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21
	I1206 20:15:41.086373  120996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 with IP's: [192.168.61.192 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 20:15:41.197437  120996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 ...
	I1206 20:15:41.197470  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21: {Name:mkbbadf29b0d59f332c8ce9ff67c67d3ca12aa26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.197661  120996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21 ...
	I1206 20:15:41.197682  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21: {Name:mk4c3c03bcb2230fc8cb74c47ba0e05d48da0ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.197774  120996 certs.go:337] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt
	I1206 20:15:41.197880  120996 certs.go:341] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key
	I1206 20:15:41.197949  120996 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key
	I1206 20:15:41.197971  120996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt with IP's: []
	I1206 20:15:41.598679  120996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt ...
	I1206 20:15:41.598710  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt: {Name:mkb77a95ad0addf9acd5c9bf01b0ffc8de6e0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.598874  120996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key ...
	I1206 20:15:41.598889  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key: {Name:mkc732ed250bbf0840017180e73efc203eba166f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.599055  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 20:15:41.599093  120996 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 20:15:41.599103  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 20:15:41.599125  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 20:15:41.599168  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 20:15:41.599195  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 20:15:41.599232  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 20:15:41.599883  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 20:15:41.624812  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 20:15:41.650187  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 20:15:41.674485  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 20:15:41.698270  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 20:15:41.721020  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 20:15:41.745140  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 20:15:41.770557  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 20:15:41.795231  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 20:15:41.821360  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 20:15:41.845544  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 20:15:41.869335  120996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 20:15:41.888087  120996 ssh_runner.go:195] Run: openssl version
	I1206 20:15:41.894632  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 20:15:41.907245  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.912955  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.913025  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.919221  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 20:15:41.930660  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 20:15:41.942151  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.946967  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.947034  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.952949  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 20:15:41.963528  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 20:15:41.973984  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.978597  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.978663  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.984469  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 20:15:41.995387  120996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 20:15:41.999768  120996 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 20:15:41.999815  120996 kubeadm.go:404] StartCluster: {Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 20:15:41.999880  120996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 20:15:41.999947  120996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 20:15:42.047446  120996 cri.go:89] found id: ""
	I1206 20:15:42.047529  120996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 20:15:42.057915  120996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:15:42.068059  120996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:15:42.080208  120996 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:15:42.080260  120996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:15:42.214896  120996 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1206 20:15:42.214985  120996 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:15:42.492727  120996 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:15:42.492883  120996 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:15:42.493047  120996 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:15:42.746186  120996 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:15:42.761997  120996 out.go:204]   - Generating certificates and keys ...
	I1206 20:15:42.762133  120996 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:15:42.762238  120996 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:15:42.946642  120996 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 20:15:43.233781  120996 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 20:15:43.428093  120996 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 20:15:43.572927  120996 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 20:15:43.675521  120996 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 20:15:43.675955  120996 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-347168] and IPs [192.168.61.192 127.0.0.1 ::1]
	I1206 20:15:44.078655  120996 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 20:15:44.078879  120996 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-347168] and IPs [192.168.61.192 127.0.0.1 ::1]
	I1206 20:15:44.303828  120996 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 20:15:44.358076  120996 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 20:15:44.518551  120996 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 20:15:44.518878  120996 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:15:44.689318  120996 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:15:44.979567  120996 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 20:15:45.074293  120996 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:15:45.291683  120996 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:15:45.481809  120996 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:15:45.482648  120996 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:15:45.486356  120996 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:15:45.488443  120996 out.go:204]   - Booting up control plane ...
	I1206 20:15:45.488566  120996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:15:45.488678  120996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:15:45.488756  120996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:15:45.508193  120996 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:15:45.508987  120996 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:15:45.509071  120996 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:15:45.651715  120996 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:15:53.654790  120996 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005357 seconds
	I1206 20:15:53.672507  120996 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:15:53.686605  120996 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:15:54.227394  120996 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:15:54.227619  120996 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-347168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:15:54.743961  120996 kubeadm.go:322] [bootstrap-token] Using token: zzfjhv.rhhjxylbr6v9obzo
	I1206 20:15:54.745695  120996 out.go:204]   - Configuring RBAC rules ...
	I1206 20:15:54.745846  120996 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:15:54.757514  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:15:54.767939  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:15:54.774859  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:15:54.780189  120996 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:15:54.790194  120996 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:15:54.802105  120996 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:15:55.063098  120996 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:15:55.170001  120996 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:15:55.174699  120996 kubeadm.go:322] 
	I1206 20:15:55.174776  120996 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:15:55.174793  120996 kubeadm.go:322] 
	I1206 20:15:55.174869  120996 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:15:55.174880  120996 kubeadm.go:322] 
	I1206 20:15:55.174915  120996 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:15:55.174990  120996 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:15:55.175102  120996 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:15:55.175127  120996 kubeadm.go:322] 
	I1206 20:15:55.175224  120996 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:15:55.175237  120996 kubeadm.go:322] 
	I1206 20:15:55.175309  120996 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:15:55.175319  120996 kubeadm.go:322] 
	I1206 20:15:55.175388  120996 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:15:55.175496  120996 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:15:55.175614  120996 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:15:55.175625  120996 kubeadm.go:322] 
	I1206 20:15:55.175749  120996 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:15:55.175871  120996 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:15:55.175880  120996 kubeadm.go:322] 
	I1206 20:15:55.176008  120996 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zzfjhv.rhhjxylbr6v9obzo \
	I1206 20:15:55.176148  120996 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:15:55.176178  120996 kubeadm.go:322] 	--control-plane 
	I1206 20:15:55.176188  120996 kubeadm.go:322] 
	I1206 20:15:55.176289  120996 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:15:55.176301  120996 kubeadm.go:322] 
	I1206 20:15:55.176396  120996 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zzfjhv.rhhjxylbr6v9obzo \
	I1206 20:15:55.176519  120996 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:15:55.176693  120996 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
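
The --discovery-token-ca-cert-hash printed in the join commands is kubeadm's public-key pin: a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short, hedged Go sketch that recomputes it; the ca.crt path is assumed from the certificateDir logged earlier:

// cahash.go: recompute kubeadm's discovery-token-ca-cert-hash
// (sha256 over the CA certificate's Subject Public Key Info).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path under the logged certificateDir
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Pointed at the same CA, this should reproduce the sha256 value shown in the join commands above.
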
	I1206 20:15:55.176720  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:55.176734  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:55.178744  120996 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:15:55.180471  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:15:55.195688  120996 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:15:55.215480  120996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:15:55.215551  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=newest-cni-347168 minikube.k8s.io/updated_at=2023_12_06T20_15_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:55.215551  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:55.558703  120996 ops.go:34] apiserver oom_adj: -16
	I1206 20:15:55.558893  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:55.654316  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:56.238770  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:56.738289  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:57.238568  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:57.738490  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:58.238942  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:58.738647  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:59.238245  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:15:59.739042  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:00.238719  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:00.738914  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:01.238187  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:01.738834  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:02.238212  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:02.739060  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:03.238224  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:03.738976  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:04.238152  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:04.738477  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:05.238489  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:05.738190  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:06.238207  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:06.739054  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:07.238517  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:07.738230  120996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:16:07.846909  120996 kubeadm.go:1088] duration metric: took 12.63143328s to wait for elevateKubeSystemPrivileges.
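
The block of repeated "kubectl get sa default" runs above is minikube polling, at roughly half-second intervals, until the default service account exists so the minikube-rbac cluster-admin binding can be applied; the 12.63s figure is the total wait. A stand-alone sketch of that kind of poll loop, not minikube's code and using plain kubectl from PATH:

// pollsa.go: retry "kubectl get sa default" every 500ms until it succeeds
// or a deadline passes, mirroring the repeated runs in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
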
	I1206 20:16:07.846950  120996 kubeadm.go:406] StartCluster complete in 25.847137925s
	I1206 20:16:07.846977  120996 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:16:07.847064  120996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:16:07.851131  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:16:07.851458  120996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:16:07.851554  120996 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:16:07.851634  120996 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-347168"
	I1206 20:16:07.851655  120996 addons.go:69] Setting default-storageclass=true in profile "newest-cni-347168"
	I1206 20:16:07.851666  120996 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-347168"
	I1206 20:16:07.851684  120996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-347168"
	I1206 20:16:07.851734  120996 host.go:66] Checking if "newest-cni-347168" exists ...
	I1206 20:16:07.851756  120996 config.go:182] Loaded profile config "newest-cni-347168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:16:07.852180  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.852203  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.852214  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.852240  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.872723  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I1206 20:16:07.872740  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I1206 20:16:07.873224  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.873303  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.873760  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.873783  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.873988  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.874010  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.874258  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.874463  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:16:07.875233  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.875809  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.875837  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.878376  120996 addons.go:231] Setting addon default-storageclass=true in "newest-cni-347168"
	I1206 20:16:07.878424  120996 host.go:66] Checking if "newest-cni-347168" exists ...
	I1206 20:16:07.878853  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.878882  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.893412  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I1206 20:16:07.894052  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.894187  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1206 20:16:07.894691  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.894717  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.894789  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.895179  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.895362  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.895386  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.895394  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:16:07.895761  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.896546  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:07.896586  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:07.897295  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:16:07.899259  120996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:16:07.900607  120996 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:16:07.900663  120996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:16:07.900687  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:16:07.904194  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.904953  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:16:07.905057  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.905425  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:16:07.905674  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:16:07.905794  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:16:07.905889  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:16:07.919638  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I1206 20:16:07.920089  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:07.920600  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:16:07.920631  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:07.921042  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:07.921192  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:16:07.922940  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:16:07.923193  120996 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:16:07.923214  120996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:16:07.923235  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:16:07.926193  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.926682  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:16:07.926718  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:16:07.926927  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:16:07.927207  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:16:07.927412  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:16:07.927580  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:16:07.931087  120996 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-347168" context rescaled to 1 replicas
	I1206 20:16:07.931148  120996 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:16:07.932888  120996 out.go:177] * Verifying Kubernetes components...
	I1206 20:16:07.934253  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:16:08.053584  120996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
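
The pipeline above edits the coredns ConfigMap in place: it splices a hosts block resolving host.minikube.internal to the host-side address 192.168.61.1 in front of the forward directive, adds a log directive in front of errors, and replaces the ConfigMap. The affected part of the Corefile ends up roughly like the excerpt below (remaining default directives omitted; indentation is illustrative):

.:53 {
    log
    errors
    # ... other default directives ...
    hosts {
       192.168.61.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    # ... remaining directives ...
}
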
	I1206 20:16:08.055008  120996 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:16:08.055052  120996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:16:08.127986  120996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:16:08.143775  120996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:16:08.771647  120996 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1206 20:16:08.771747  120996 api_server.go:72] duration metric: took 840.566608ms to wait for apiserver process to appear ...
	I1206 20:16:08.771775  120996 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:16:08.771796  120996 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I1206 20:16:08.783873  120996 api_server.go:279] https://192.168.61.192:8443/healthz returned 200:
	ok
	I1206 20:16:08.790010  120996 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 20:16:08.790048  120996 api_server.go:131] duration metric: took 18.264411ms to wait for apiserver health ...
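
The healthz wait above is an HTTPS GET against the apiserver until it answers 200 "ok". A self-contained Go sketch of that probe; certificate verification is skipped here only to keep the example stand-alone, whereas the real check presumably authenticates with the credentials in the generated kubeconfig:

// healthz.go: probe the apiserver health endpoint shown in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.168.61.192:8443/healthz")
	if err != nil {
		fmt.Println("not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // the log shows 200 with body "ok"
}
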
	I1206 20:16:08.790060  120996 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:16:08.800649  120996 system_pods.go:59] 7 kube-system pods found
	I1206 20:16:08.800688  120996 system_pods.go:61] "coredns-76f75df574-hxfmn" [10b8ef25-a5fc-46e6-9523-eecec91a2ee7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 20:16:08.800696  120996 system_pods.go:61] "coredns-76f75df574-klm8m" [78c66a8e-d0fa-4803-8dfa-738cb9a156c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 20:16:08.800702  120996 system_pods.go:61] "etcd-newest-cni-347168" [45388753-7f55-4b66-8f23-6534f2144977] Running
	I1206 20:16:08.800707  120996 system_pods.go:61] "kube-apiserver-newest-cni-347168" [44eda642-7ea5-487d-aa75-93c96613387c] Running
	I1206 20:16:08.800712  120996 system_pods.go:61] "kube-controller-manager-newest-cni-347168" [98a6990a-da64-405c-9fc6-2532e0c5a218] Running
	I1206 20:16:08.800718  120996 system_pods.go:61] "kube-proxy-mg5gl" [fb3398e0-2a88-4740-a4f2-38f748e01b34] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 20:16:08.800723  120996 system_pods.go:61] "kube-scheduler-newest-cni-347168" [dc9309ae-8fae-4d6a-9052-d36fd148f9db] Running
	I1206 20:16:08.800731  120996 system_pods.go:74] duration metric: took 10.66428ms to wait for pod list to return data ...
	I1206 20:16:08.800739  120996 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:16:08.803688  120996 default_sa.go:45] found service account: "default"
	I1206 20:16:08.803710  120996 default_sa.go:55] duration metric: took 2.965556ms for default service account to be created ...
	I1206 20:16:08.803719  120996 kubeadm.go:581] duration metric: took 872.545849ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1206 20:16:08.803737  120996 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:16:08.806698  120996 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:16:08.806726  120996 node_conditions.go:123] node cpu capacity is 2
	I1206 20:16:08.806737  120996 node_conditions.go:105] duration metric: took 2.995555ms to run NodePressure ...
	I1206 20:16:08.806748  120996 start.go:228] waiting for startup goroutines ...
	I1206 20:16:08.987941  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.987971  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.987976  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.987996  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.988280  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988315  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:08.988334  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.988343  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.988384  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Closing plugin on server side
	I1206 20:16:08.988443  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988458  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:08.988479  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:08.988494  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:08.988585  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988587  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Closing plugin on server side
	I1206 20:16:08.988598  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:08.988774  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Closing plugin on server side
	I1206 20:16:08.988800  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:08.988810  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:09.013122  120996 main.go:141] libmachine: Making call to close driver server
	I1206 20:16:09.013150  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Close
	I1206 20:16:09.013473  120996 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:16:09.013497  120996 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:16:09.016691  120996 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1206 20:16:09.018570  120996 addons.go:502] enable addons completed in 1.167019478s: enabled=[storage-provisioner default-storageclass]
	I1206 20:16:09.018619  120996 start.go:233] waiting for cluster config update ...
	I1206 20:16:09.018674  120996 start.go:242] writing updated cluster config ...
	I1206 20:16:09.018992  120996 ssh_runner.go:195] Run: rm -f paused
	I1206 20:16:09.086122  120996 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1206 20:16:09.088334  120996 out.go:177] * Done! kubectl is now configured to use "newest-cni-347168" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:55:36 UTC, ends at Wed 2023-12-06 20:16:19 UTC. --
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.243930197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893779243910298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5139e15c-69c0-497a-97ae-1132492fd1bd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.244886067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f053c6e4-9141-4b61-8757-23789ad7d3ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.244964149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f053c6e4-9141-4b61-8757-23789ad7d3ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.245203862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f053c6e4-9141-4b61-8757-23789ad7d3ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.288540916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4d0ac9ec-0641-49fe-b8ba-d2cbe65392e8 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.288628584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4d0ac9ec-0641-49fe-b8ba-d2cbe65392e8 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.289700077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1e332230-197f-4805-aebc-2906063e4f17 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.290174281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893779290159783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1e332230-197f-4805-aebc-2906063e4f17 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.290655318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46286308-27be-48c0-bf04-680d85425678 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.290730668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46286308-27be-48c0-bf04-680d85425678 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.290981905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46286308-27be-48c0-bf04-680d85425678 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.331049867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e4588bf4-ce2e-47cd-ad4d-53468abf56f9 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.331190981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e4588bf4-ce2e-47cd-ad4d-53468abf56f9 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.333276360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=24f6a05b-fab2-4192-b3e9-205268cfa870 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.333727346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893779333714030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=24f6a05b-fab2-4192-b3e9-205268cfa870 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.334509599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=45483a20-ac63-46b7-96cb-8dca3b6aa545 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.334581049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=45483a20-ac63-46b7-96cb-8dca3b6aa545 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.334874702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=45483a20-ac63-46b7-96cb-8dca3b6aa545 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.371600912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4b719be4-db3f-4127-bccc-c5f9a31374b8 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.371663010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4b719be4-db3f-4127-bccc-c5f9a31374b8 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.372944029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7145d009-ccc4-4f31-b0cd-c41a383abe70 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.373339022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893779373323697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7145d009-ccc4-4f31-b0cd-c41a383abe70 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.373948640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fc9b4f80-bb04-43b9-9fb5-911310bc5619 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.374039583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fc9b4f80-bb04-43b9-9fb5-911310bc5619 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:16:19 embed-certs-209025 crio[715]: time="2023-12-06 20:16:19.374237432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963,PodSandboxId:aa8ddc84680befc4b30a234c4249bceeb52eb15e429c711e2838567689ba1a68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892868842368671,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2417fc35-04fd-4dcf-9d16-2649a0d3bb3b,},Annotations:map[string]string{io.kubernetes.container.hash: db237a82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8,PodSandboxId:8ab5d5e9cbd4db30e175df17c5ab87e5bc854d12243d7346ec8571c843c23d3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867941460913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8lsns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c5f16e-0c30-4602-b772-c6e0c8a577a8,},Annotations:map[string]string{io.kubernetes.container.hash: db547275,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf,PodSandboxId:21cbb5eeafb68d4a273894ca170c79a5e7104c4501ee4c3690eec3cc1087e7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701892867964959305,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57z8q,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 24c81a49-d80e-47df-86d2-0056ccc25858,},Annotations:map[string]string{io.kubernetes.container.hash: aa1d6e99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319,PodSandboxId:f88e50648869ea191c725014e7d910bea76e2185b599b8650515c8de1848b687,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RU
NNING,CreatedAt:1701892865536310947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e49b3f8-7eee-4c04-ae22-75ccd216bb27,},Annotations:map[string]string{io.kubernetes.container.hash: 7bf9d0af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082,PodSandboxId:c7ab5554d8445414b077ecb101830a0e882e70cd31ab450023fa9970a958f798,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:17018928422
48274452,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd71b809324a6eca42b4ebc7a97ad34,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5,PodSandboxId:3f327ecd16f2f36fcac75781c6558c2970ba96313eb3dcd94908b425416f6978,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701892842507300167,Labels:map[string]str
ing{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5e880c82b42dbac88e3f6043104b285,},Annotations:map[string]string{io.kubernetes.container.hash: 73813bbe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2,PodSandboxId:c6c5b9af8927b8a0af65ccf34c01f1e92567fadd2ff818a08e996b210e53ad69,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701892841943383872,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfacf53695bcf209fb2e55d303df2a45,},Annotations:map[string]string{io.kubernetes.container.hash: b43ab966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e,PodSandboxId:89acf66f8001149e0cb8897ed1c54a9d123265e428eff9ab47f00e76e92ce25c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701892841761364461,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-209025,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61231636f1ecaceb5a6fff900bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fc9b4f80-bb04-43b9-9fb5-911310bc5619 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad375b57a7bfd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   aa8ddc84680be       storage-provisioner
	ba55c737b2f85       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   21cbb5eeafb68       coredns-5dd5756b68-57z8q
	f038b2fcbbc60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   8ab5d5e9cbd4d       coredns-5dd5756b68-8lsns
	5a3aaa502aefb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   f88e50648869e       kube-proxy-nf2cw
	101928f953b6e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   3f327ecd16f2f       etcd-embed-certs-209025
	279722e047600       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   c7ab5554d8445       kube-scheduler-embed-certs-209025
	fa67f5071999c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   c6c5b9af8927b       kube-apiserver-embed-certs-209025
	845783e64fc22       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   89acf66f80011       kube-controller-manager-embed-certs-209025
	
	* 
	* ==> coredns [ba55c737b2f857dcaf6c9ad188bf4df852a56c2adba97f930243cff54ec613bf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33554 - 49889 "HINFO IN 9130365448740154584.8350522042857180029. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027820732s
	
	* 
	* ==> coredns [f038b2fcbbc60342831a47869f2147b4b99c785c6e227a393193e1b1f896e7e8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42945 - 9398 "HINFO IN 2600546345607168387.6013047244688371649. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029321211s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-209025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-209025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=embed-certs-209025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T20_00_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 20:00:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-209025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 20:16:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:11:22 +0000   Wed, 06 Dec 2023 20:00:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:11:22 +0000   Wed, 06 Dec 2023 20:00:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:11:22 +0000   Wed, 06 Dec 2023 20:00:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:11:22 +0000   Wed, 06 Dec 2023 20:00:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.164
	  Hostname:    embed-certs-209025
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 70c42a0214ba45939561709350295f75
	  System UUID:                70c42a02-14ba-4593-9561-709350295f75
	  Boot ID:                    a907d52f-eaa3-4a92-b99d-6589e3dd4745
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-57z8q                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-8lsns                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-209025                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-209025             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-209025    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nf2cw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-209025             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-5qxxj               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-209025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-209025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-209025 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-209025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-209025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-209025 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-209025 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node embed-certs-209025 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-209025 event: Registered Node embed-certs-209025 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070173] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.714144] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.561472] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154930] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.444118] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.177474] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.122000] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.157167] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.130994] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.243403] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Dec 6 19:56] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[ +18.993905] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 6 20:00] systemd-fstab-generator[3504]: Ignoring "noauto" for root device
	[ +10.323945] systemd-fstab-generator[3828]: Ignoring "noauto" for root device
	[Dec 6 20:01] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [101928f953b6e16d024b444d21598dc8a8db9e6ab4620d3fd64d93daf34cc3d5] <==
	* {"level":"info","ts":"2023-12-06T20:00:44.74455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:44.744634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:44.744673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgPreVoteResp from 80a63a57d726c697 at term 1"}
	{"level":"info","ts":"2023-12-06T20:00:44.744691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became candidate at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.744701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 received MsgVoteResp from 80a63a57d726c697 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.744712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"80a63a57d726c697 became leader at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.744723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 80a63a57d726c697 elected leader 80a63a57d726c697 at term 2"}
	{"level":"info","ts":"2023-12-06T20:00:44.746881Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.748083Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"80a63a57d726c697","local-member-attributes":"{Name:embed-certs-209025 ClientURLs:[https://192.168.50.164:2379]}","request-path":"/0/members/80a63a57d726c697/attributes","cluster-id":"d41e51b80202c3fb","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T20:00:44.74816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:44.749635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T20:00:44.750227Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T20:00:44.760539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.164:2379"}
	{"level":"info","ts":"2023-12-06T20:00:44.765357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d41e51b80202c3fb","local-member-id":"80a63a57d726c697","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.765992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.766131Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T20:00:44.780125Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T20:00:44.780314Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T20:10:44.784985Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2023-12-06T20:10:44.787961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":710,"took":"2.564722ms","hash":415155373}
	{"level":"info","ts":"2023-12-06T20:10:44.788063Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":415155373,"revision":710,"compact-revision":-1}
	{"level":"info","ts":"2023-12-06T20:15:42.604372Z","caller":"traceutil/trace.go:171","msg":"trace[1293770472] transaction","detail":"{read_only:false; response_revision:1194; number_of_response:1; }","duration":"156.805491ms","start":"2023-12-06T20:15:42.447513Z","end":"2023-12-06T20:15:42.604319Z","steps":["trace[1293770472] 'process raft request'  (duration: 156.155762ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T20:15:44.794198Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":953}
	{"level":"info","ts":"2023-12-06T20:15:44.803556Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":953,"took":"8.265037ms","hash":3677728867}
	{"level":"info","ts":"2023-12-06T20:15:44.803689Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3677728867,"revision":953,"compact-revision":710}
	
	* 
	* ==> kernel <==
	*  20:16:19 up 20 min,  0 users,  load average: 0.60, 0.35, 0.25
	Linux embed-certs-209025 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [fa67f5071999cfc4d54edfa0e62dc8e8bca3808c5aff1ef8c3b6b160c30380f2] <==
	* E1206 20:11:47.812687       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:11:47.812807       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:12:46.658473       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1206 20:13:46.658347       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:13:47.812307       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:13:47.812451       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:13:47.812486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:13:47.813517       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:13:47.813628       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:13:47.813636       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:14:46.659078       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1206 20:15:46.658145       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:15:46.817350       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:15:46.817503       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:15:46.818239       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1206 20:15:47.818627       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:15:47.818723       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W1206 20:15:47.818822       1 handler_proxy.go:93] no RequestInfo found in the context
	I1206 20:15:47.818827       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1206 20:15:47.818938       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:15:47.820015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [845783e64fc221ad090b0cdd61ae73409ac477bbac6393a11553a57ca6cfd04e] <==
	* I1206 20:10:34.662387       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:11:04.238229       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:11:04.671248       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:11:34.245473       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:11:34.680827       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:12:04.254505       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:12:04.698029       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:12:16.144494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.037µs"
	I1206 20:12:31.145436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="162.069µs"
	E1206 20:12:34.260622       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:12:34.706733       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:13:04.267458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:13:04.715628       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:13:34.274895       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:13:34.727092       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:14:04.281332       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:14:04.736455       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:14:34.287075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:14:34.746433       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:15:04.299848       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:15:04.759224       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:15:34.308202       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:15:34.771824       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:16:04.316435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:16:04.787610       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [5a3aaa502aefb75f5b5d755d86f8f904b25753fe2f35b086d51680ad1f49e319] <==
	* I1206 20:01:08.687741       1 server_others.go:69] "Using iptables proxy"
	I1206 20:01:08.735459       1 node.go:141] Successfully retrieved node IP: 192.168.50.164
	I1206 20:01:08.881600       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1206 20:01:08.881670       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 20:01:08.886354       1 server_others.go:152] "Using iptables Proxier"
	I1206 20:01:08.887812       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 20:01:08.888181       1 server.go:846] "Version info" version="v1.28.4"
	I1206 20:01:08.888411       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 20:01:08.890702       1 config.go:188] "Starting service config controller"
	I1206 20:01:08.898414       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 20:01:08.891889       1 config.go:315] "Starting node config controller"
	I1206 20:01:08.908276       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 20:01:08.909271       1 config.go:97] "Starting endpoint slice config controller"
	I1206 20:01:08.909392       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 20:01:08.998889       1 shared_informer.go:318] Caches are synced for service config
	I1206 20:01:09.008830       1 shared_informer.go:318] Caches are synced for node config
	I1206 20:01:09.009992       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [279722e0476001fe1aae961b651ca615f7634985041754008e4f24944d10c082] <==
	* W1206 20:00:47.712284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:47.712374       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:47.713689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 20:00:47.713832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1206 20:00:47.740150       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 20:00:47.740204       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 20:00:47.767055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:47.767112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:47.804591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 20:00:47.804658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 20:00:47.819992       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:47.820115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1206 20:00:47.867850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:47.867971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:47.903081       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 20:00:47.903169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 20:00:47.937303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:47.937422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1206 20:00:47.981123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:47.981247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 20:00:48.136564       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:48.136633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1206 20:00:48.143320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:48.143383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1206 20:00:50.119029       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:55:36 UTC, ends at Wed 2023-12-06 20:16:19 UTC. --
	Dec 06 20:13:49 embed-certs-209025 kubelet[3835]: E1206 20:13:49.124699    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:13:51 embed-certs-209025 kubelet[3835]: E1206 20:13:51.229538    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:13:51 embed-certs-209025 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:13:51 embed-certs-209025 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:13:51 embed-certs-209025 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:14:01 embed-certs-209025 kubelet[3835]: E1206 20:14:01.125137    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:14:15 embed-certs-209025 kubelet[3835]: E1206 20:14:15.125147    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:14:27 embed-certs-209025 kubelet[3835]: E1206 20:14:27.124532    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:14:38 embed-certs-209025 kubelet[3835]: E1206 20:14:38.124857    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:14:51 embed-certs-209025 kubelet[3835]: E1206 20:14:51.231157    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:14:51 embed-certs-209025 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:14:51 embed-certs-209025 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:14:51 embed-certs-209025 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:14:52 embed-certs-209025 kubelet[3835]: E1206 20:14:52.124540    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:15:07 embed-certs-209025 kubelet[3835]: E1206 20:15:07.124932    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:15:20 embed-certs-209025 kubelet[3835]: E1206 20:15:20.124688    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:15:32 embed-certs-209025 kubelet[3835]: E1206 20:15:32.125262    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:15:46 embed-certs-209025 kubelet[3835]: E1206 20:15:46.124090    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:15:51 embed-certs-209025 kubelet[3835]: E1206 20:15:51.234293    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:15:51 embed-certs-209025 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:15:51 embed-certs-209025 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:15:51 embed-certs-209025 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:15:51 embed-certs-209025 kubelet[3835]: E1206 20:15:51.328404    3835 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 06 20:15:57 embed-certs-209025 kubelet[3835]: E1206 20:15:57.124953    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	Dec 06 20:16:10 embed-certs-209025 kubelet[3835]: E1206 20:16:10.124604    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-5qxxj" podUID="4eaddb4b-aec0-4cc7-b467-bb882bcba8a0"
	
	* 
	* ==> storage-provisioner [ad375b57a7bfd4aeba26bc78c2535a0b637de33baa8344d21033aee93b66a963] <==
	* I1206 20:01:09.012743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 20:01:09.026150       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 20:01:09.026245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 20:01:09.040993       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 20:01:09.041535       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-209025_2f34e4c0-aa5e-4e2f-8fc1-c4caadcf7692!
	I1206 20:01:09.046204       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44c60c63-e4a2-4de1-b8dd-99775d6e768d", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-209025_2f34e4c0-aa5e-4e2f-8fc1-c4caadcf7692 became leader
	I1206 20:01:09.143178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-209025_2f34e4c0-aa5e-4e2f-8fc1-c4caadcf7692!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-209025 -n embed-certs-209025
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-209025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-5qxxj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-209025 describe pod metrics-server-57f55c9bc5-5qxxj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-209025 describe pod metrics-server-57f55c9bc5-5qxxj: exit status 1 (64.467126ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-5qxxj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-209025 describe pod metrics-server-57f55c9bc5-5qxxj: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (365.55s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1206 20:10:42.081128   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 20:10:51.525919   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-989559 -n no-preload-989559
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:15:46.309494654 +0000 UTC m=+5720.393988476
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-989559 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-989559 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.302µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-989559 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-989559 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-989559 logs -n 25: (1.450414122s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo find                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo crio                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC | 06 Dec 23 20:15 UTC |
	| start   | -p newest-cni-347168 --memory=2200 --alsologtostderr   | newest-cni-347168            | jenkins | v1.32.0 | 06 Dec 23 20:15 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 20:15:09
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 20:15:09.805224  120996 out.go:296] Setting OutFile to fd 1 ...
	I1206 20:15:09.805509  120996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:15:09.805520  120996 out.go:309] Setting ErrFile to fd 2...
	I1206 20:15:09.805524  120996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:15:09.805720  120996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 20:15:09.806348  120996 out.go:303] Setting JSON to false
	I1206 20:15:09.807270  120996 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10660,"bootTime":1701883050,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 20:15:09.807333  120996 start.go:138] virtualization: kvm guest
	I1206 20:15:09.809854  120996 out.go:177] * [newest-cni-347168] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 20:15:09.811393  120996 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 20:15:09.812932  120996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 20:15:09.811424  120996 notify.go:220] Checking for updates...
	I1206 20:15:09.815815  120996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:15:09.817403  120996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:09.818874  120996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 20:15:09.820369  120996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 20:15:09.822395  120996 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:15:09.822498  120996 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:15:09.822603  120996 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:15:09.822725  120996 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 20:15:09.861615  120996 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 20:15:09.863332  120996 start.go:298] selected driver: kvm2
	I1206 20:15:09.863353  120996 start.go:902] validating driver "kvm2" against <nil>
	I1206 20:15:09.863380  120996 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 20:15:09.864102  120996 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 20:15:09.864195  120996 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 20:15:09.879735  120996 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 20:15:09.879783  120996 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1206 20:15:09.879805  120996 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1206 20:15:09.880097  120996 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1206 20:15:09.880183  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:09.880204  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:09.880226  120996 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 20:15:09.880242  120996 start_flags.go:323] config:
	{Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 20:15:09.880418  120996 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 20:15:09.882883  120996 out.go:177] * Starting control plane node newest-cni-347168 in cluster newest-cni-347168
	I1206 20:15:09.884341  120996 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 20:15:09.884386  120996 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1206 20:15:09.884401  120996 cache.go:56] Caching tarball of preloaded images
	I1206 20:15:09.884535  120996 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 20:15:09.884549  120996 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1206 20:15:09.884667  120996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json ...
	I1206 20:15:09.884703  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json: {Name:mkc51a1c7ccc2567aa83707a3b832218332d0cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:09.884894  120996 start.go:365] acquiring machines lock for newest-cni-347168: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 20:15:09.884933  120996 start.go:369] acquired machines lock for "newest-cni-347168" in 22.74µs
	I1206 20:15:09.884956  120996 start.go:93] Provisioning new machine with config: &{Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:15:09.885048  120996 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 20:15:09.886939  120996 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1206 20:15:09.887110  120996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:15:09.887163  120996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:15:09.902685  120996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I1206 20:15:09.903118  120996 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:15:09.903749  120996 main.go:141] libmachine: Using API Version  1
	I1206 20:15:09.903771  120996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:15:09.904154  120996 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:15:09.904366  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:09.904499  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:09.904692  120996 start.go:159] libmachine.API.Create for "newest-cni-347168" (driver="kvm2")
	I1206 20:15:09.904762  120996 client.go:168] LocalClient.Create starting
	I1206 20:15:09.904828  120996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem
	I1206 20:15:09.904862  120996 main.go:141] libmachine: Decoding PEM data...
	I1206 20:15:09.904880  120996 main.go:141] libmachine: Parsing certificate...
	I1206 20:15:09.904944  120996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem
	I1206 20:15:09.904961  120996 main.go:141] libmachine: Decoding PEM data...
	I1206 20:15:09.904976  120996 main.go:141] libmachine: Parsing certificate...
	I1206 20:15:09.904993  120996 main.go:141] libmachine: Running pre-create checks...
	I1206 20:15:09.905007  120996 main.go:141] libmachine: (newest-cni-347168) Calling .PreCreateCheck
	I1206 20:15:09.905441  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:09.905904  120996 main.go:141] libmachine: Creating machine...
	I1206 20:15:09.905926  120996 main.go:141] libmachine: (newest-cni-347168) Calling .Create
	I1206 20:15:09.906160  120996 main.go:141] libmachine: (newest-cni-347168) Creating KVM machine...
	I1206 20:15:09.907558  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found existing default KVM network
	I1206 20:15:09.908771  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.908571  121019 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:15:65} reservation:<nil>}
	I1206 20:15:09.909652  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.909565  121019 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:51:aa} reservation:<nil>}
	I1206 20:15:09.910815  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:09.910704  121019 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001fcfb0}
	I1206 20:15:09.916826  120996 main.go:141] libmachine: (newest-cni-347168) DBG | trying to create private KVM network mk-newest-cni-347168 192.168.61.0/24...
	I1206 20:15:10.001011  120996 main.go:141] libmachine: (newest-cni-347168) DBG | private KVM network mk-newest-cni-347168 192.168.61.0/24 created
	I1206 20:15:10.001053  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.000937  121019 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:10.001072  120996 main.go:141] libmachine: (newest-cni-347168) Setting up store path in /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 ...
	I1206 20:15:10.001125  120996 main.go:141] libmachine: (newest-cni-347168) Building disk image from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 20:15:10.001177  120996 main.go:141] libmachine: (newest-cni-347168) Downloading /home/jenkins/minikube-integration/17740-63652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1206 20:15:10.243016  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.242863  121019 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa...
	I1206 20:15:10.293758  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.293630  121019 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/newest-cni-347168.rawdisk...
	I1206 20:15:10.293791  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Writing magic tar header
	I1206 20:15:10.293805  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Writing SSH key tar header
	I1206 20:15:10.293814  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:10.293781  121019 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 ...
	I1206 20:15:10.293940  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168
	I1206 20:15:10.293981  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube/machines
	I1206 20:15:10.293999  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168 (perms=drwx------)
	I1206 20:15:10.294014  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 20:15:10.294031  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17740-63652
	I1206 20:15:10.294057  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1206 20:15:10.294074  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home/jenkins
	I1206 20:15:10.294090  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube/machines (perms=drwxr-xr-x)
	I1206 20:15:10.294110  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652/.minikube (perms=drwxr-xr-x)
	I1206 20:15:10.294124  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration/17740-63652 (perms=drwxrwxr-x)
	I1206 20:15:10.294139  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 20:15:10.294151  120996 main.go:141] libmachine: (newest-cni-347168) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 20:15:10.294165  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Checking permissions on dir: /home
	I1206 20:15:10.294177  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Skipping /home - not owner
	I1206 20:15:10.294190  120996 main.go:141] libmachine: (newest-cni-347168) Creating domain...
	I1206 20:15:10.295484  120996 main.go:141] libmachine: (newest-cni-347168) define libvirt domain using xml: 
	I1206 20:15:10.295514  120996 main.go:141] libmachine: (newest-cni-347168) <domain type='kvm'>
	I1206 20:15:10.295523  120996 main.go:141] libmachine: (newest-cni-347168)   <name>newest-cni-347168</name>
	I1206 20:15:10.295529  120996 main.go:141] libmachine: (newest-cni-347168)   <memory unit='MiB'>2200</memory>
	I1206 20:15:10.295535  120996 main.go:141] libmachine: (newest-cni-347168)   <vcpu>2</vcpu>
	I1206 20:15:10.295540  120996 main.go:141] libmachine: (newest-cni-347168)   <features>
	I1206 20:15:10.295546  120996 main.go:141] libmachine: (newest-cni-347168)     <acpi/>
	I1206 20:15:10.295559  120996 main.go:141] libmachine: (newest-cni-347168)     <apic/>
	I1206 20:15:10.295581  120996 main.go:141] libmachine: (newest-cni-347168)     <pae/>
	I1206 20:15:10.295594  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.295603  120996 main.go:141] libmachine: (newest-cni-347168)   </features>
	I1206 20:15:10.295610  120996 main.go:141] libmachine: (newest-cni-347168)   <cpu mode='host-passthrough'>
	I1206 20:15:10.295624  120996 main.go:141] libmachine: (newest-cni-347168)   
	I1206 20:15:10.295634  120996 main.go:141] libmachine: (newest-cni-347168)   </cpu>
	I1206 20:15:10.295666  120996 main.go:141] libmachine: (newest-cni-347168)   <os>
	I1206 20:15:10.295693  120996 main.go:141] libmachine: (newest-cni-347168)     <type>hvm</type>
	I1206 20:15:10.295705  120996 main.go:141] libmachine: (newest-cni-347168)     <boot dev='cdrom'/>
	I1206 20:15:10.295748  120996 main.go:141] libmachine: (newest-cni-347168)     <boot dev='hd'/>
	I1206 20:15:10.295764  120996 main.go:141] libmachine: (newest-cni-347168)     <bootmenu enable='no'/>
	I1206 20:15:10.295788  120996 main.go:141] libmachine: (newest-cni-347168)   </os>
	I1206 20:15:10.295801  120996 main.go:141] libmachine: (newest-cni-347168)   <devices>
	I1206 20:15:10.295815  120996 main.go:141] libmachine: (newest-cni-347168)     <disk type='file' device='cdrom'>
	I1206 20:15:10.295837  120996 main.go:141] libmachine: (newest-cni-347168)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/boot2docker.iso'/>
	I1206 20:15:10.295848  120996 main.go:141] libmachine: (newest-cni-347168)       <target dev='hdc' bus='scsi'/>
	I1206 20:15:10.295861  120996 main.go:141] libmachine: (newest-cni-347168)       <readonly/>
	I1206 20:15:10.295872  120996 main.go:141] libmachine: (newest-cni-347168)     </disk>
	I1206 20:15:10.295886  120996 main.go:141] libmachine: (newest-cni-347168)     <disk type='file' device='disk'>
	I1206 20:15:10.295904  120996 main.go:141] libmachine: (newest-cni-347168)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1206 20:15:10.295923  120996 main.go:141] libmachine: (newest-cni-347168)       <source file='/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/newest-cni-347168.rawdisk'/>
	I1206 20:15:10.295936  120996 main.go:141] libmachine: (newest-cni-347168)       <target dev='hda' bus='virtio'/>
	I1206 20:15:10.295949  120996 main.go:141] libmachine: (newest-cni-347168)     </disk>
	I1206 20:15:10.295958  120996 main.go:141] libmachine: (newest-cni-347168)     <interface type='network'>
	I1206 20:15:10.295982  120996 main.go:141] libmachine: (newest-cni-347168)       <source network='mk-newest-cni-347168'/>
	I1206 20:15:10.295999  120996 main.go:141] libmachine: (newest-cni-347168)       <model type='virtio'/>
	I1206 20:15:10.296069  120996 main.go:141] libmachine: (newest-cni-347168)     </interface>
	I1206 20:15:10.296096  120996 main.go:141] libmachine: (newest-cni-347168)     <interface type='network'>
	I1206 20:15:10.296114  120996 main.go:141] libmachine: (newest-cni-347168)       <source network='default'/>
	I1206 20:15:10.296123  120996 main.go:141] libmachine: (newest-cni-347168)       <model type='virtio'/>
	I1206 20:15:10.296133  120996 main.go:141] libmachine: (newest-cni-347168)     </interface>
	I1206 20:15:10.296142  120996 main.go:141] libmachine: (newest-cni-347168)     <serial type='pty'>
	I1206 20:15:10.296151  120996 main.go:141] libmachine: (newest-cni-347168)       <target port='0'/>
	I1206 20:15:10.296158  120996 main.go:141] libmachine: (newest-cni-347168)     </serial>
	I1206 20:15:10.296167  120996 main.go:141] libmachine: (newest-cni-347168)     <console type='pty'>
	I1206 20:15:10.296175  120996 main.go:141] libmachine: (newest-cni-347168)       <target type='serial' port='0'/>
	I1206 20:15:10.296184  120996 main.go:141] libmachine: (newest-cni-347168)     </console>
	I1206 20:15:10.296192  120996 main.go:141] libmachine: (newest-cni-347168)     <rng model='virtio'>
	I1206 20:15:10.296204  120996 main.go:141] libmachine: (newest-cni-347168)       <backend model='random'>/dev/random</backend>
	I1206 20:15:10.296211  120996 main.go:141] libmachine: (newest-cni-347168)     </rng>
	I1206 20:15:10.296220  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.296234  120996 main.go:141] libmachine: (newest-cni-347168)     
	I1206 20:15:10.296244  120996 main.go:141] libmachine: (newest-cni-347168)   </devices>
	I1206 20:15:10.296252  120996 main.go:141] libmachine: (newest-cni-347168) </domain>
	I1206 20:15:10.296280  120996 main.go:141] libmachine: (newest-cni-347168) 
	I1206 20:15:10.300528  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:7f:92:13 in network default
	I1206 20:15:10.301121  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring networks are active...
	I1206 20:15:10.301154  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:10.301898  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring network default is active
	I1206 20:15:10.302202  120996 main.go:141] libmachine: (newest-cni-347168) Ensuring network mk-newest-cni-347168 is active
	I1206 20:15:10.302641  120996 main.go:141] libmachine: (newest-cni-347168) Getting domain xml...
	I1206 20:15:10.303450  120996 main.go:141] libmachine: (newest-cni-347168) Creating domain...
	I1206 20:15:11.631063  120996 main.go:141] libmachine: (newest-cni-347168) Waiting to get IP...
	I1206 20:15:11.631867  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:11.632488  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:11.632520  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:11.632443  121019 retry.go:31] will retry after 233.957525ms: waiting for machine to come up
	I1206 20:15:11.867869  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:11.868462  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:11.868491  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:11.868395  121019 retry.go:31] will retry after 255.274669ms: waiting for machine to come up
	I1206 20:15:12.124876  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.125472  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.125503  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.125411  121019 retry.go:31] will retry after 349.317013ms: waiting for machine to come up
	I1206 20:15:12.475860  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.476566  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.476599  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.476497  121019 retry.go:31] will retry after 416.403168ms: waiting for machine to come up
	I1206 20:15:12.894125  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:12.894686  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:12.894709  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:12.894603  121019 retry.go:31] will retry after 608.573742ms: waiting for machine to come up
	I1206 20:15:13.504176  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:13.504628  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:13.504660  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:13.504560  121019 retry.go:31] will retry after 646.189699ms: waiting for machine to come up
	I1206 20:15:14.152435  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:14.152802  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:14.152825  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:14.152756  121019 retry.go:31] will retry after 961.404409ms: waiting for machine to come up
	I1206 20:15:15.115574  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:15.116051  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:15.116073  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:15.115993  121019 retry.go:31] will retry after 1.329333828s: waiting for machine to come up
	I1206 20:15:16.447315  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:16.447883  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:16.447925  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:16.447841  121019 retry.go:31] will retry after 1.448183792s: waiting for machine to come up
	I1206 20:15:17.898296  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:17.898794  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:17.898835  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:17.898770  121019 retry.go:31] will retry after 1.963121871s: waiting for machine to come up
	I1206 20:15:19.863330  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:19.863874  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:19.863907  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:19.863824  121019 retry.go:31] will retry after 1.863190443s: waiting for machine to come up
	I1206 20:15:21.729550  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:21.730063  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:21.730098  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:21.730003  121019 retry.go:31] will retry after 3.534433438s: waiting for machine to come up
	I1206 20:15:25.266286  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:25.266770  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:25.266793  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:25.266731  121019 retry.go:31] will retry after 3.268833182s: waiting for machine to come up
	I1206 20:15:28.538314  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:28.538836  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find current IP address of domain newest-cni-347168 in network mk-newest-cni-347168
	I1206 20:15:28.538866  120996 main.go:141] libmachine: (newest-cni-347168) DBG | I1206 20:15:28.538774  121019 retry.go:31] will retry after 4.552063341s: waiting for machine to come up
	I1206 20:15:33.094236  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.094859  120996 main.go:141] libmachine: (newest-cni-347168) Found IP for machine: 192.168.61.192
	I1206 20:15:33.094891  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has current primary IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.094903  120996 main.go:141] libmachine: (newest-cni-347168) Reserving static IP address...
	I1206 20:15:33.095318  120996 main.go:141] libmachine: (newest-cni-347168) DBG | unable to find host DHCP lease matching {name: "newest-cni-347168", mac: "52:54:00:11:9b:a6", ip: "192.168.61.192"} in network mk-newest-cni-347168
	I1206 20:15:33.176566  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Getting to WaitForSSH function...
	I1206 20:15:33.176603  120996 main.go:141] libmachine: (newest-cni-347168) Reserved static IP address: 192.168.61.192
	I1206 20:15:33.176620  120996 main.go:141] libmachine: (newest-cni-347168) Waiting for SSH to be available...
	I1206 20:15:33.179571  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.180101  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.180146  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.180242  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using SSH client type: external
	I1206 20:15:33.180273  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa (-rw-------)
	I1206 20:15:33.180316  120996 main.go:141] libmachine: (newest-cni-347168) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 20:15:33.180335  120996 main.go:141] libmachine: (newest-cni-347168) DBG | About to run SSH command:
	I1206 20:15:33.180354  120996 main.go:141] libmachine: (newest-cni-347168) DBG | exit 0
	I1206 20:15:33.269146  120996 main.go:141] libmachine: (newest-cni-347168) DBG | SSH cmd err, output: <nil>: 
	I1206 20:15:33.269444  120996 main.go:141] libmachine: (newest-cni-347168) KVM machine creation complete!
	I1206 20:15:33.269829  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:33.270405  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:33.270633  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:33.270822  120996 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1206 20:15:33.270835  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:15:33.272293  120996 main.go:141] libmachine: Detecting operating system of created instance...
	I1206 20:15:33.272342  120996 main.go:141] libmachine: Waiting for SSH to be available...
	I1206 20:15:33.272355  120996 main.go:141] libmachine: Getting to WaitForSSH function...
	I1206 20:15:33.272365  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.275189  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.275639  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.275661  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.275861  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.276078  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.276274  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.276436  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.276619  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.277063  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.277084  120996 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1206 20:15:33.396625  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 20:15:33.396664  120996 main.go:141] libmachine: Detecting the provisioner...
	I1206 20:15:33.396673  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.399852  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.400190  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.400224  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.400361  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.400593  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.400784  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.400971  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.401166  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.401629  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.401646  120996 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1206 20:15:33.527309  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1206 20:15:33.527418  120996 main.go:141] libmachine: found compatible host: buildroot
	I1206 20:15:33.527427  120996 main.go:141] libmachine: Provisioning with buildroot...
	I1206 20:15:33.527434  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.527777  120996 buildroot.go:166] provisioning hostname "newest-cni-347168"
	I1206 20:15:33.527818  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.528027  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.530841  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.531228  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.531280  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.531377  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.531609  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.531813  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.532007  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.532266  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.532677  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.532700  120996 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-347168 && echo "newest-cni-347168" | sudo tee /etc/hostname
	I1206 20:15:33.662449  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-347168
	
	I1206 20:15:33.662483  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.665436  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.665800  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.665846  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.665981  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.666218  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.666403  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.666527  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.666696  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:33.667172  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:33.667192  120996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-347168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-347168/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-347168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 20:15:33.796492  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 20:15:33.796531  120996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 20:15:33.796567  120996 buildroot.go:174] setting up certificates
	I1206 20:15:33.796589  120996 provision.go:83] configureAuth start
	I1206 20:15:33.796604  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetMachineName
	I1206 20:15:33.796964  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:33.799993  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.800370  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.800403  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.800521  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.802989  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.803300  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.803341  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.803478  120996 provision.go:138] copyHostCerts
	I1206 20:15:33.803571  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 20:15:33.803603  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 20:15:33.803687  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 20:15:33.803858  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 20:15:33.803869  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 20:15:33.803910  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 20:15:33.804042  120996 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 20:15:33.804091  120996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 20:15:33.804141  120996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 20:15:33.804214  120996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-347168 san=[192.168.61.192 192.168.61.192 localhost 127.0.0.1 minikube newest-cni-347168]
	I1206 20:15:33.994563  120996 provision.go:172] copyRemoteCerts
	I1206 20:15:33.994644  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 20:15:33.994682  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:33.997818  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.998118  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:33.998153  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:33.998411  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:33.998612  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:33.998774  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:33.998935  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.091615  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 20:15:34.118438  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 20:15:34.145084  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 20:15:34.170898  120996 provision.go:86] duration metric: configureAuth took 374.286079ms
	I1206 20:15:34.170929  120996 buildroot.go:189] setting minikube options for container-runtime
	I1206 20:15:34.171164  120996 config.go:182] Loaded profile config "newest-cni-347168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:15:34.171268  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.174189  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.174600  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.174628  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.174785  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.174985  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.175141  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.175338  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.175523  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:34.175843  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:34.175862  120996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 20:15:34.505869  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 20:15:34.505897  120996 main.go:141] libmachine: Checking connection to Docker...
	I1206 20:15:34.505925  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetURL
	I1206 20:15:34.507244  120996 main.go:141] libmachine: (newest-cni-347168) DBG | Using libvirt version 6000000
	I1206 20:15:34.509869  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.510193  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.510223  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.510381  120996 main.go:141] libmachine: Docker is up and running!
	I1206 20:15:34.510395  120996 main.go:141] libmachine: Reticulating splines...
	I1206 20:15:34.510402  120996 client.go:171] LocalClient.Create took 24.605627718s
	I1206 20:15:34.510422  120996 start.go:167] duration metric: libmachine.API.Create for "newest-cni-347168" took 24.605732185s
	I1206 20:15:34.510431  120996 start.go:300] post-start starting for "newest-cni-347168" (driver="kvm2")
	I1206 20:15:34.510441  120996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 20:15:34.510457  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.510730  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 20:15:34.510761  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.512910  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.513206  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.513248  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.513417  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.513618  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.513799  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.513964  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.602772  120996 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 20:15:34.607707  120996 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 20:15:34.607747  120996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 20:15:34.607827  120996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 20:15:34.607921  120996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 20:15:34.608034  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 20:15:34.617266  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 20:15:34.642598  120996 start.go:303] post-start completed in 132.153683ms
	I1206 20:15:34.642655  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetConfigRaw
	I1206 20:15:34.643248  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:34.645908  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.646216  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.646250  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.646495  120996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json ...
	I1206 20:15:34.646667  120996 start.go:128] duration metric: createHost completed in 24.7616076s
	I1206 20:15:34.646690  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.649005  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.649396  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.649427  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.649582  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.649793  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.649962  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.650115  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.650296  120996 main.go:141] libmachine: Using SSH client type: native
	I1206 20:15:34.650651  120996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1206 20:15:34.650665  120996 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 20:15:34.770239  120996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701893734.748854790
	
	I1206 20:15:34.770269  120996 fix.go:206] guest clock: 1701893734.748854790
	I1206 20:15:34.770279  120996 fix.go:219] Guest: 2023-12-06 20:15:34.74885479 +0000 UTC Remote: 2023-12-06 20:15:34.646679476 +0000 UTC m=+24.893998228 (delta=102.175314ms)
	I1206 20:15:34.770307  120996 fix.go:190] guest clock delta is within tolerance: 102.175314ms
	I1206 20:15:34.770313  120996 start.go:83] releasing machines lock for "newest-cni-347168", held for 24.885371157s
	I1206 20:15:34.770338  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.770693  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:34.773617  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.774159  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.774191  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.774423  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775037  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775241  120996 main.go:141] libmachine: (newest-cni-347168) Calling .DriverName
	I1206 20:15:34.775404  120996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 20:15:34.775472  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.775508  120996 ssh_runner.go:195] Run: cat /version.json
	I1206 20:15:34.775536  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHHostname
	I1206 20:15:34.778593  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.778852  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779035  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.779083  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779187  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:34.779216  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:34.779351  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.779479  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHPort
	I1206 20:15:34.779560  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.779632  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHKeyPath
	I1206 20:15:34.779712  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.779772  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetSSHUsername
	I1206 20:15:34.779846  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.779906  120996 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/newest-cni-347168/id_rsa Username:docker}
	I1206 20:15:34.863386  120996 ssh_runner.go:195] Run: systemctl --version
	I1206 20:15:34.895207  120996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 20:15:35.057492  120996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 20:15:35.064260  120996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 20:15:35.064332  120996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 20:15:35.080857  120996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 20:15:35.080883  120996 start.go:475] detecting cgroup driver to use...
	I1206 20:15:35.080977  120996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 20:15:35.094647  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 20:15:35.108721  120996 docker.go:203] disabling cri-docker service (if available) ...
	I1206 20:15:35.108805  120996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 20:15:35.122547  120996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 20:15:35.137628  120996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 20:15:35.249519  120996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 20:15:35.372591  120996 docker.go:219] disabling docker service ...
	I1206 20:15:35.372650  120996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 20:15:35.386595  120996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 20:15:35.399053  120996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 20:15:35.517013  120996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 20:15:35.630728  120996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 20:15:35.642975  120996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 20:15:35.661406  120996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 20:15:35.661494  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.670952  120996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 20:15:35.671028  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.680444  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.690123  120996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 20:15:35.699431  120996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 20:15:35.709773  120996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 20:15:35.718080  120996 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 20:15:35.718160  120996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 20:15:35.729953  120996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 20:15:35.739791  120996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 20:15:35.856949  120996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 20:15:36.044563  120996 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 20:15:36.044646  120996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 20:15:36.050663  120996 start.go:543] Will wait 60s for crictl version
	I1206 20:15:36.050727  120996 ssh_runner.go:195] Run: which crictl
	I1206 20:15:36.055266  120996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 20:15:36.095529  120996 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 20:15:36.095602  120996 ssh_runner.go:195] Run: crio --version
	I1206 20:15:36.141633  120996 ssh_runner.go:195] Run: crio --version
	I1206 20:15:36.192165  120996 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 20:15:36.193762  120996 main.go:141] libmachine: (newest-cni-347168) Calling .GetIP
	I1206 20:15:36.197069  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:36.197489  120996 main.go:141] libmachine: (newest-cni-347168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:9b:a6", ip: ""} in network mk-newest-cni-347168: {Iface:virbr4 ExpiryTime:2023-12-06 21:15:26 +0000 UTC Type:0 Mac:52:54:00:11:9b:a6 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-347168 Clientid:01:52:54:00:11:9b:a6}
	I1206 20:15:36.197518  120996 main.go:141] libmachine: (newest-cni-347168) DBG | domain newest-cni-347168 has defined IP address 192.168.61.192 and MAC address 52:54:00:11:9b:a6 in network mk-newest-cni-347168
	I1206 20:15:36.197830  120996 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 20:15:36.202239  120996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 20:15:36.215884  120996 localpath.go:92] copying /home/jenkins/minikube-integration/17740-63652/.minikube/client.crt -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.crt
	I1206 20:15:36.216041  120996 localpath.go:117] copying /home/jenkins/minikube-integration/17740-63652/.minikube/client.key -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.key
	I1206 20:15:36.218392  120996 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1206 20:15:36.220048  120996 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 20:15:36.220120  120996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 20:15:36.262585  120996 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 20:15:36.262652  120996 ssh_runner.go:195] Run: which lz4
	I1206 20:15:36.267061  120996 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 20:15:36.271359  120996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 20:15:36.271388  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401677649 bytes)
	I1206 20:15:37.981124  120996 crio.go:444] Took 1.714117 seconds to copy over tarball
	I1206 20:15:37.981223  120996 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 20:15:40.790111  120996 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.808826705s)
	I1206 20:15:40.790157  120996 crio.go:451] Took 2.809002 seconds to extract the tarball
	I1206 20:15:40.790167  120996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 20:15:40.828966  120996 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 20:15:40.916896  120996 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 20:15:40.916921  120996 cache_images.go:84] Images are preloaded, skipping loading
	I1206 20:15:40.916985  120996 ssh_runner.go:195] Run: crio config
	I1206 20:15:40.998264  120996 cni.go:84] Creating CNI manager for ""
	I1206 20:15:40.998288  120996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:15:40.998307  120996 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1206 20:15:40.998328  120996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-347168 NodeName:newest-cni-347168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 20:15:40.998468  120996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-347168"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 20:15:40.998549  120996 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-347168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 20:15:40.998608  120996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 20:15:41.008416  120996 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 20:15:41.008501  120996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 20:15:41.017748  120996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1206 20:15:41.035185  120996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 20:15:41.052224  120996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1206 20:15:41.069299  120996 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I1206 20:15:41.073265  120996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 20:15:41.085857  120996 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168 for IP: 192.168.61.192
	I1206 20:15:41.085896  120996 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.086087  120996 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 20:15:41.086151  120996 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 20:15:41.086325  120996 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/client.key
	I1206 20:15:41.086357  120996 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21
	I1206 20:15:41.086373  120996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 with IP's: [192.168.61.192 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 20:15:41.197437  120996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 ...
	I1206 20:15:41.197470  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21: {Name:mkbbadf29b0d59f332c8ce9ff67c67d3ca12aa26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.197661  120996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21 ...
	I1206 20:15:41.197682  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21: {Name:mk4c3c03bcb2230fc8cb74c47ba0e05d48da0ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.197774  120996 certs.go:337] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt.8756bd21 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt
	I1206 20:15:41.197880  120996 certs.go:341] copying /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key.8756bd21 -> /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key
	I1206 20:15:41.197949  120996 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key
	I1206 20:15:41.197971  120996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt with IP's: []
	I1206 20:15:41.598679  120996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt ...
	I1206 20:15:41.598710  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt: {Name:mkb77a95ad0addf9acd5c9bf01b0ffc8de6e0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.598874  120996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key ...
	I1206 20:15:41.598889  120996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key: {Name:mkc732ed250bbf0840017180e73efc203eba166f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:15:41.599055  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 20:15:41.599093  120996 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 20:15:41.599103  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 20:15:41.599125  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 20:15:41.599168  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 20:15:41.599195  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 20:15:41.599232  120996 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 20:15:41.599883  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 20:15:41.624812  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 20:15:41.650187  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 20:15:41.674485  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 20:15:41.698270  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 20:15:41.721020  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 20:15:41.745140  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 20:15:41.770557  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 20:15:41.795231  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 20:15:41.821360  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 20:15:41.845544  120996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 20:15:41.869335  120996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 20:15:41.888087  120996 ssh_runner.go:195] Run: openssl version
	I1206 20:15:41.894632  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 20:15:41.907245  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.912955  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.913025  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 20:15:41.919221  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 20:15:41.930660  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 20:15:41.942151  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.946967  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.947034  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 20:15:41.952949  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 20:15:41.963528  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 20:15:41.973984  120996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.978597  120996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.978663  120996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 20:15:41.984469  120996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
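The preceding lines record the same CA-trust pattern three times: copy a PEM into /usr/share/ca-certificates, run `openssl x509 -hash -noout -in` on it, and symlink it into /etc/ssl/certs as <hash>.0. A minimal Go sketch of that pattern follows; the certPath argument and local paths are illustrative, and it assumes openssl is installed and the process can write to /etc/ssl/certs (the log does the equivalent over SSH as root).

// Sketch of the CA-trust step shown above: install a PEM under
// /usr/share/ca-certificates, ask openssl for its subject hash, and link it
// into /etc/ssl/certs as <hash>.0 so TLS clients can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath string) error {
	dst := filepath.Join("/usr/share/ca-certificates", filepath.Base(certPath))
	data, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		return err
	}

	// Same command the log runs over SSH: openssl x509 -hash -noout -in <cert>.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(dst, link)
}

func main() {
	if err := installCA("/tmp/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}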
	I1206 20:15:41.995387  120996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 20:15:41.999768  120996 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 20:15:41.999815  120996 kubeadm.go:404] StartCluster: {Name:newest-cni-347168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.1 ClusterName:newest-cni-347168 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
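The long StartCluster line above is a cluster-config struct rendered in the style of Go's %+v verb, which prints inline "Field:value" pairs. The sketch below uses a hypothetical, heavily trimmed stand-in for that config (field values copied from the dump) just to reproduce the shape of the output; it is not minikube's actual type.

// Hypothetical, trimmed stand-in for the cluster config printed above.
package main

import "fmt"

type kubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	ServiceCIDR       string
}

type clusterConfig struct {
	Name             string
	Memory           int
	CPUs             int
	Driver           string
	KubernetesConfig kubernetesConfig
}

func main() {
	cfg := clusterConfig{
		Name:   "newest-cni-347168",
		Memory: 2200,
		CPUs:   2,
		Driver: "kvm2",
		KubernetesConfig: kubernetesConfig{
			KubernetesVersion: "v1.29.0-rc.1",
			ClusterName:       "newest-cni-347168",
			ContainerRuntime:  "crio",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	// Prints e.g. StartCluster: {Name:newest-cni-347168 Memory:2200 CPUs:2 Driver:kvm2 ...}
	fmt.Printf("StartCluster: %+v\n", cfg)
}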
	I1206 20:15:41.999880  120996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 20:15:41.999947  120996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 20:15:42.047446  120996 cri.go:89] found id: ""
	I1206 20:15:42.047529  120996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 20:15:42.057915  120996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:15:42.068059  120996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:15:42.080208  120996 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
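The entry above shows the stale-config check: the four standard kubeconfig files from a previous control plane are looked up, none exist (exit status 2), so cleanup is skipped and kubeadm init runs against a clean node. A small Go sketch of that decision follows; it illustrates the check the log records locally with os.Stat and is not minikube's actual implementation.

// Sketch: decide whether leftover kubeconfig files require cleanup
// before running kubeadm init.
package main

import (
	"fmt"
	"os"
)

func staleConfigPresent() bool {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err == nil {
			return true // at least one leftover kubeconfig exists
		}
	}
	return false
}

func main() {
	if staleConfigPresent() {
		fmt.Println("stale config found: cleanup needed before kubeadm init")
	} else {
		fmt.Println("config check failed / nothing found: skipping stale config cleanup")
	}
}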
	I1206 20:15:42.080260  120996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:15:42.214896  120996 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1206 20:15:42.214985  120996 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:15:42.492727  120996 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:15:42.492883  120996 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:15:42.493047  120996 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:15:42.746186  120996 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:15:42.761997  120996 out.go:204]   - Generating certificates and keys ...
	I1206 20:15:42.762133  120996 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:15:42.762238  120996 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:15:42.946642  120996 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 20:15:43.233781  120996 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 20:15:43.428093  120996 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 20:15:43.572927  120996 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 20:15:43.675521  120996 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 20:15:43.675955  120996 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-347168] and IPs [192.168.61.192 127.0.0.1 ::1]
	I1206 20:15:44.078655  120996 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 20:15:44.078879  120996 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-347168] and IPs [192.168.61.192 127.0.0.1 ::1]
	I1206 20:15:44.303828  120996 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 20:15:44.358076  120996 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 20:15:44.518551  120996 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 20:15:44.518878  120996 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:15:44.689318  120996 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:15:44.979567  120996 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 20:15:45.074293  120996 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:15:45.291683  120996 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:15:45.481809  120996 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:15:45.482648  120996 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:15:45.486356  120996 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
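The phase output above ([certs], [kubeconfig], [etcd], [control-plane]) is produced by the kubeadm init command started at 20:15:42. A minimal Go sketch of that invocation follows, with the binary and config paths taken from the log and a shortened ignore-preflight list; both paths are assumed to exist on the target host, and this is not minikube's own runner code.

// Sketch: run kubeadm init with a pre-rendered config and selected
// preflight checks ignored, streaming its phase output.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.29.0-rc.1/kubeadm",
		"init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}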
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:55:59 UTC, ends at Wed 2023-12-06 20:15:47 UTC. --
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.160479346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=895cc4bc-19b4-4361-b530-e95a4bb95d3c name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.161730548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=47a8f3f9-0c29-48a3-b8cd-6e3dc8f15e42 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.162482206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893747162465592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=47a8f3f9-0c29-48a3-b8cd-6e3dc8f15e42 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.163684707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=43faabfa-341a-422e-914e-5641260090ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.163763316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=43faabfa-341a-422e-914e-5641260090ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.163987147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=43faabfa-341a-422e-914e-5641260090ed name=/runtime.v1.RuntimeService/ListContainers
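The journal entries above show CRI-O answering RuntimeService calls (Version, ImageFsInfo, ListContainers) over its gRPC API; the empty filter is what produces the "No filters were applied, returning full container list" lines. Below is a minimal client sketch for the same two calls using the published CRI API (k8s.io/cri-api). It assumes CRI-O's default socket path /var/run/crio/crio.sock; in practice `crictl ps -a` issues equivalent requests.

// Sketch of a CRI client issuing the Version and ListContainers calls
// that appear in the CRI-O debug log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// An empty filter returns the full container list, matching the log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}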
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.217520722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a902fb9d-4e87-400b-853e-679e5371feaa name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.217714606Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a902fb9d-4e87-400b-853e-679e5371feaa name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.220581268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=94752ccf-b91f-4740-a245-b5e148b55a3d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.221138617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893747221118725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=94752ccf-b91f-4740-a245-b5e148b55a3d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.222427514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a283414-f13b-43f9-a24a-9c5f5637d90f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.222490725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a283414-f13b-43f9-a24a-9c5f5637d90f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.222844036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a283414-f13b-43f9-a24a-9c5f5637d90f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.238321773Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=c06b685e-f05e-4ebb-bd68-09915de969bf name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.238523893Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-h9pkz,Uid:05501356-bf9b-4a99-a1b9-40d0caef38db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892618577793840,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T19:56:50.878541730Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:769ef77ea6f67f93bdff349850a18fe2dacdfc176b0d7d2dbdb94f725f6166aa,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-vz7qc,Uid:97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f,Namespace:
kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892614977707207,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-vz7qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T19:56:50.878545425Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&PodSandboxMetadata{Name:busybox,Uid:73861515-9ff9-459b-888d-b551bd3eac06,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892614567331105,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-06T19:56:50.8
78531291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c4d98de3-12ec-47f6-a6a6-f1dc61b479be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892611225145186,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-06T19:56:50.878540614Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&PodSandboxMetadata{Name:kube-proxy-zgqvt,Uid:550b2491-c14f-47c4-82d5-1301fa351305,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892611221733272,Labels:map[string]string{controller-revision-hash: 6b54b954d8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14f-47c4-82d5-1301fa351305,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-12-06T19:56:50.878538313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-989559,Uid:022bdbc807e59c6609983bd01c8f9099,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892603430820765,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.5:2379,kubernetes.io/config.hash: 022bdbc807e59c6609983bd01c8f9099,kubernetes.io/config.seen: 2023-12-06T19:56:42.878537771Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-989559,Ui
d:62fd9ce7939ded9a9dc2eebb729c4bb3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892603424858118,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc2eebb729c4bb3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62fd9ce7939ded9a9dc2eebb729c4bb3,kubernetes.io/config.seen: 2023-12-06T19:56:42.878536936Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-989559,Uid:9da0fc2c52dd0a0b10f62491f0029378,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892603420269203,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.5:8443,kubernetes.io/config.hash: 9da0fc2c52dd0a0b10f62491f0029378,kubernetes.io/config.seen: 2023-12-06T19:56:42.878532399Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-989559,Uid:5531a9e48939c123655068ed18719019,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701892603417481580,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5531a9e48939c123655068ed18719019,kubern
etes.io/config.seen: 2023-12-06T19:56:42.878536071Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=c06b685e-f05e-4ebb-bd68-09915de969bf name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.239707160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e2a0510a-8771-422b-b396-e8b6a26f3bcc name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.239762804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e2a0510a-8771-422b-b396-e8b6a26f3bcc name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.239975035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e2a0510a-8771-422b-b396-e8b6a26f3bcc name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.269881764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=55bca4de-8184-4abb-b518-b662608dd80d name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.269935438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=55bca4de-8184-4abb-b518-b662608dd80d name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.271908394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7bba8e26-d4d6-40f6-a45e-714256425090 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.272218847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893747272205900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=7bba8e26-d4d6-40f6-a45e-714256425090 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.272770084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6afeef5b-a5b4-41bd-aa43-059103a312e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.272816034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6afeef5b-a5b4-41bd-aa43-059103a312e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:47 no-preload-989559 crio[721]: time="2023-12-06 20:15:47.272990689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701892643167217828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07,PodSandboxId:86500c7e690bbb411c1e6705acf9be22226888d75f882e4ae7aa0dc6481fcc6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701892619226302414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h9pkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05501356-bf9b-4a99-a1b9-40d0caef38db,},Annotations:map[string]string{io.kubernetes.container.hash: dd425747,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2037a52e07f01097679be27c7ee8e697c886fba15f6934055f4e1af533cddb9,PodSandboxId:9bc1deb7b22d52a7ead9d48f921a87078689fa8c0d33f296602853cd62297483,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701892616236142910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 73861515-9ff9-459b-888d-b551bd3eac06,},Annotations:map[string]string{io.kubernetes.container.hash: ae530940,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9,PodSandboxId:738e0ea3813b5b038dd2a87efd2e463314ae90b6ce68e5d74d84d91467982f23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1701892612133161111,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c4d98de3-12ec-47f6-a6a6-f1dc61b479be,},Annotations:map[string]string{io.kubernetes.container.hash: 92a2a5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259,PodSandboxId:c023cca4e4bfd31ae00a2633d0a3ff041d33389bff1d668362a40abfe0eac11c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701892612043223232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgqvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 550b2491-c14
f-47c4-82d5-1301fa351305,},Annotations:map[string]string{io.kubernetes.container.hash: 654b931f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd,PodSandboxId:36098303ba1ede54bc911123c3f7b90ec68fd8ba635eb30a09d62f60386e03c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701892604592011690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd9ce7939ded9a9dc
2eebb729c4bb3,},Annotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861,PodSandboxId:97b0fcdcbb40446874b4d46b7b75e8f08eb61242ace9d5ec54352f79df39395f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701892604275148113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 022bdbc807e59c6609983bd01c8f9099,},Annotations:map[string]string{io.kub
ernetes.container.hash: 918b4176,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87,PodSandboxId:7d58a8cccf3b81e6025acdc2b6eb79935f23e4d3a6e314b45148d4d94e66abc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701892604238508876,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5531a9e48939c123655068ed18719019,},Annotations
:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb,PodSandboxId:5206c98e2d7ff44e06189fe64dc37da6581fce3f144756a657422248b7f20182,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701892604110012504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-989559,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9da0fc2c52dd0a0b10f62491f0029378,},Annotations:map[string
]string{io.kubernetes.container.hash: 50489c62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6afeef5b-a5b4-41bd-aa43-059103a312e8 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec1601a49c79c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   738e0ea3813b5       storage-provisioner
	93aee471c37fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   86500c7e690bb       coredns-76f75df574-h9pkz
	e2037a52e07f0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   9bc1deb7b22d5       busybox
	d07b3a050ef19       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       1                   738e0ea3813b5       storage-provisioner
	0da9ad5d9749c       86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff                                      18 minutes ago      Running             kube-proxy                1                   c023cca4e4bfd       kube-proxy-zgqvt
	c00065611a1f7       b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542                                      19 minutes ago      Running             kube-scheduler            1                   36098303ba1ed       kube-scheduler-no-preload-989559
	7633ca5afa8ae       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      19 minutes ago      Running             etcd                      1                   97b0fcdcbb404       etcd-no-preload-989559
	43c8e91cea581       b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09                                      19 minutes ago      Running             kube-controller-manager   1                   7d58a8cccf3b8       kube-controller-manager-no-preload-989559
	f5b4ca951aec7       5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956                                      19 minutes ago      Running             kube-apiserver            1                   5206c98e2d7ff       kube-apiserver-no-preload-989559
	
	* 
	* ==> coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42717 - 22482 "HINFO IN 3959492625878978717.4147345806210626056. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027901011s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-989559
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-989559
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=no-preload-989559
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T19_47_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:47:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-989559
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 20:15:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:12:39 +0000   Wed, 06 Dec 2023 19:47:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:12:39 +0000   Wed, 06 Dec 2023 19:47:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:12:39 +0000   Wed, 06 Dec 2023 19:47:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:12:39 +0000   Wed, 06 Dec 2023 19:57:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    no-preload-989559
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f68799b222e5492590de8f6722e893a0
	  System UUID:                f68799b2-22e5-4925-90de-8f6722e893a0
	  Boot ID:                    ea5532e0-30f2-4abf-a496-684a2ba5aa4c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.1
	  Kube-Proxy Version:         v1.29.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-76f75df574-h9pkz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-989559                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-989559             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-989559    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-zgqvt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-989559             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-vz7qc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-989559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-989559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-989559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-989559 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-989559 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-989559 event: Registered Node no-preload-989559 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-989559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-989559 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-989559 event: Registered Node no-preload-989559 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.953767] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.569585] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.165097] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec 6 19:56] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.457146] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.157919] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.165544] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.112495] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.240262] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +30.371066] systemd-fstab-generator[1341]: Ignoring "noauto" for root device
	[ +16.093619] kauditd_printk_skb: 24 callbacks suppressed
	
	* 
	* ==> etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] <==
	* {"level":"info","ts":"2023-12-06T19:56:48.369982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 received MsgVoteResp from c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2023-12-06T19:56:48.370018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became leader at term 3"}
	{"level":"info","ts":"2023-12-06T19:56:48.370054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c5263387c79c0223 elected leader c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2023-12-06T19:56:48.371887Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c5263387c79c0223","local-member-attributes":"{Name:no-preload-989559 ClientURLs:[https://192.168.39.5:2379]}","request-path":"/0/members/c5263387c79c0223/attributes","cluster-id":"436188ec3031a10e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T19:56:48.371959Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:56:48.371905Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:56:48.373011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T19:56:48.373066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T19:56:48.375778Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T19:56:48.375824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.5:2379"}
	{"level":"info","ts":"2023-12-06T20:06:48.410047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":827}
	{"level":"info","ts":"2023-12-06T20:06:48.41337Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":827,"took":"2.500425ms","hash":625595744}
	{"level":"info","ts":"2023-12-06T20:06:48.413507Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":625595744,"revision":827,"compact-revision":-1}
	{"level":"info","ts":"2023-12-06T20:11:48.419956Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1070}
	{"level":"info","ts":"2023-12-06T20:11:48.422102Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1070,"took":"1.253208ms","hash":1685019198}
	{"level":"info","ts":"2023-12-06T20:11:48.422248Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1685019198,"revision":1070,"compact-revision":827}
	{"level":"warn","ts":"2023-12-06T20:15:41.609087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.835675ms","expected-duration":"100ms","prefix":"","request":"header:<ID:154121021791332263 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:02238c40b44ae7a6>","response":"size:40"}
	{"level":"info","ts":"2023-12-06T20:15:41.609548Z","caller":"traceutil/trace.go:171","msg":"trace[1146748944] linearizableReadLoop","detail":"{readStateIndex:1764; appliedIndex:1763; }","duration":"160.474986ms","start":"2023-12-06T20:15:41.449033Z","end":"2023-12-06T20:15:41.609508Z","steps":["trace[1146748944] 'read index received'  (duration: 31.823931ms)","trace[1146748944] 'applied index is now lower than readState.Index'  (duration: 128.649836ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-06T20:15:41.609785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.741265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-06T20:15:41.609826Z","caller":"traceutil/trace.go:171","msg":"trace[2062001751] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1502; }","duration":"160.807639ms","start":"2023-12-06T20:15:41.449001Z","end":"2023-12-06T20:15:41.609808Z","steps":["trace[2062001751] 'agreement among raft nodes before linearized reading'  (duration: 160.715019ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T20:15:41.87279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.693387ms","expected-duration":"100ms","prefix":"","request":"header:<ID:154121021791332265 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.5\" mod_revision:1495 > success:<request_put:<key:\"/registry/masterleases/192.168.39.5\" value_size:65 lease:154121021791332262 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.5\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-06T20:15:41.873002Z","caller":"traceutil/trace.go:171","msg":"trace[1744490431] linearizableReadLoop","detail":"{readStateIndex:1765; appliedIndex:1764; }","duration":"261.983133ms","start":"2023-12-06T20:15:41.611006Z","end":"2023-12-06T20:15:41.872989Z","steps":["trace[1744490431] 'read index received'  (duration: 128.353414ms)","trace[1744490431] 'applied index is now lower than readState.Index'  (duration: 133.62832ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-06T20:15:41.873115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.168451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-06T20:15:41.873118Z","caller":"traceutil/trace.go:171","msg":"trace[1808254788] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"262.255453ms","start":"2023-12-06T20:15:41.610838Z","end":"2023-12-06T20:15:41.873093Z","steps":["trace[1808254788] 'process raft request'  (duration: 128.567302ms)","trace[1808254788] 'compare'  (duration: 132.607986ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-06T20:15:41.873154Z","caller":"traceutil/trace.go:171","msg":"trace[421705591] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1503; }","duration":"262.214243ms","start":"2023-12-06T20:15:41.610933Z","end":"2023-12-06T20:15:41.873148Z","steps":["trace[421705591] 'agreement among raft nodes before linearized reading'  (duration: 262.122047ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:15:47 up 19 min,  0 users,  load average: 0.07, 0.10, 0.10
	Linux no-preload-989559 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] <==
	* W1206 20:11:50.852772       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:11:50.852843       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:11:50.852855       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:11:50.852974       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:11:50.853087       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:11:50.854042       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:12:50.853089       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:12:50.853137       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:12:50.853145       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:12:50.854354       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:12:50.854513       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:12:50.854558       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:14:50.853754       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:14:50.853818       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1206 20:14:50.853827       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 20:14:50.855455       1 handler_proxy.go:93] no RequestInfo found in the context
	E1206 20:14:50.855530       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:14:50.855536       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:15:41.875382       1 trace.go:236] Trace[1656936607]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.5,type:*v1.Endpoints,resource:apiServerIPInfo (06-Dec-2023 20:15:41.300) (total time: 573ms):
	Trace[1656936607]: ---"Transaction prepared" 255ms (20:15:41.610)
	Trace[1656936607]: ---"Txn call completed" 263ms (20:15:41.874)
	Trace[1656936607]: [573.881689ms] [573.881689ms] END
	
	* 
	* ==> kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] <==
	* I1206 20:10:03.200436       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:10:32.699695       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:10:33.209417       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:11:02.705541       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:11:03.218544       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:11:32.712475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:11:33.227135       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:12:02.720556       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:12:03.235485       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:12:32.726452       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:12:33.245161       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:13:02.731714       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:13:03.256057       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1206 20:13:12.964872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="326.145µs"
	I1206 20:13:23.961946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="134.027µs"
	E1206 20:13:32.736455       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:13:33.264837       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:14:02.741923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:14:03.273839       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:14:32.748538       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:14:33.283682       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:15:02.754503       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:15:03.292490       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1206 20:15:32.760370       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1206 20:15:33.304365       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] <==
	* I1206 19:56:52.350661       1 server_others.go:72] "Using iptables proxy"
	I1206 19:56:52.360128       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.5"]
	I1206 19:56:52.415739       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1206 19:56:52.415789       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 19:56:52.415802       1 server_others.go:168] "Using iptables Proxier"
	I1206 19:56:52.419388       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 19:56:52.419910       1 server.go:865] "Version info" version="v1.29.0-rc.1"
	I1206 19:56:52.420033       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:56:52.425133       1 config.go:188] "Starting service config controller"
	I1206 19:56:52.425182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 19:56:52.425204       1 config.go:97] "Starting endpoint slice config controller"
	I1206 19:56:52.425242       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 19:56:52.429544       1 config.go:315] "Starting node config controller"
	I1206 19:56:52.429723       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 19:56:52.525467       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 19:56:52.525540       1 shared_informer.go:318] Caches are synced for service config
	I1206 19:56:52.529951       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] <==
	* I1206 19:56:47.584232       1 serving.go:380] Generated self-signed cert in-memory
	W1206 19:56:49.783146       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 19:56:49.783393       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 19:56:49.783410       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 19:56:49.783416       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 19:56:49.883249       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.1"
	I1206 19:56:49.883302       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:56:49.886556       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1206 19:56:49.890002       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 19:56:49.890072       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 19:56:49.890416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1206 19:56:49.991022       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:55:59 UTC, ends at Wed 2023-12-06 20:15:47 UTC. --
	Dec 06 20:12:58 no-preload-989559 kubelet[1347]: E1206 20:12:58.000996    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:13:12 no-preload-989559 kubelet[1347]: E1206 20:13:12.946145    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:13:23 no-preload-989559 kubelet[1347]: E1206 20:13:23.944528    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:13:37 no-preload-989559 kubelet[1347]: E1206 20:13:37.944700    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:13:42 no-preload-989559 kubelet[1347]: E1206 20:13:42.980011    1347 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:13:42 no-preload-989559 kubelet[1347]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:13:42 no-preload-989559 kubelet[1347]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:13:42 no-preload-989559 kubelet[1347]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:13:51 no-preload-989559 kubelet[1347]: E1206 20:13:51.944811    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:14:05 no-preload-989559 kubelet[1347]: E1206 20:14:05.943708    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:14:16 no-preload-989559 kubelet[1347]: E1206 20:14:16.944689    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:14:31 no-preload-989559 kubelet[1347]: E1206 20:14:31.943826    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:14:42 no-preload-989559 kubelet[1347]: E1206 20:14:42.982009    1347 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:14:42 no-preload-989559 kubelet[1347]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:14:42 no-preload-989559 kubelet[1347]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:14:42 no-preload-989559 kubelet[1347]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 06 20:14:45 no-preload-989559 kubelet[1347]: E1206 20:14:45.945017    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:15:00 no-preload-989559 kubelet[1347]: E1206 20:15:00.944557    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:15:13 no-preload-989559 kubelet[1347]: E1206 20:15:13.944486    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:15:24 no-preload-989559 kubelet[1347]: E1206 20:15:24.944690    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:15:38 no-preload-989559 kubelet[1347]: E1206 20:15:38.944495    1347 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vz7qc" podUID="97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f"
	Dec 06 20:15:42 no-preload-989559 kubelet[1347]: E1206 20:15:42.980566    1347 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 06 20:15:42 no-preload-989559 kubelet[1347]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 06 20:15:42 no-preload-989559 kubelet[1347]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 06 20:15:42 no-preload-989559 kubelet[1347]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] <==
	* I1206 19:56:52.347280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 19:57:22.350162       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] <==
	* I1206 19:57:23.295534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 19:57:23.307526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 19:57:23.307782       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 19:57:23.319318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 19:57:23.319538       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-989559_33c51682-fe10-45ce-b932-59ec894aaf43!
	I1206 19:57:23.319389       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe0beb93-637f-469a-88e2-6358f219c300", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-989559_33c51682-fe10-45ce-b932-59ec894aaf43 became leader
	I1206 19:57:23.420399       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-989559_33c51682-fe10-45ce-b932-59ec894aaf43!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-989559 -n no-preload-989559
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-989559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vz7qc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-989559 describe pod metrics-server-57f55c9bc5-vz7qc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-989559 describe pod metrics-server-57f55c9bc5-vz7qc: exit status 1 (76.51028ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vz7qc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-989559 describe pod metrics-server-57f55c9bc5-vz7qc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (246.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1206 20:11:27.859912   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 20:12:54.631895   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 20:13:02.204274   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 20:13:08.166611   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 20:13:22.657749   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 20:13:58.794297   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 20:14:34.367718   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 20:14:49.042087   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-448851 -n old-k8s-version-448851
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-06 20:15:05.861245713 +0000 UTC m=+5679.945739557
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-448851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-448851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.787µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-448851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-448851 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-448851 logs -n 25: (1.677686271s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-459609 sudo cat                              | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo                                  | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo find                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-459609 sudo crio                             | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-459609                                       | bridge-459609                | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-730405 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:46 UTC |
	|         | disable-driver-mounts-730405                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:46 UTC | 06 Dec 23 19:48 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989559             | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-448851        | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC | 06 Dec 23 19:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-380424  | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-209025            | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC | 06 Dec 23 19:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989559                  | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-989559                                   | no-preload-989559            | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-448851             | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-448851                              | old-k8s-version-448851       | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-380424       | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-209025                 | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-380424 | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:00 UTC |
	|         | default-k8s-diff-port-380424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-209025                                  | embed-certs-209025           | jenkins | v1.32.0 | 06 Dec 23 19:50 UTC | 06 Dec 23 20:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
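For readability: each wrapped "start" row in the table above is a single invocation. Joined, the two most recent restarts correspond roughly to the following commands (reconstructed from the table rows rather than copied from the shell history; "minikube" here stands in for whatever binary path the CI harness actually invokes):

  minikube start -p embed-certs-209025 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.4
  minikube start -p default-k8s-diff-port-380424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.4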
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:50:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:50:49.512923  115591 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:50:49.513070  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513079  115591 out.go:309] Setting ErrFile to fd 2...
	I1206 19:50:49.513084  115591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:50:49.513305  115591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:50:49.513900  115591 out.go:303] Setting JSON to false
	I1206 19:50:49.514822  115591 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9200,"bootTime":1701883050,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:50:49.514886  115591 start.go:138] virtualization: kvm guest
	I1206 19:50:49.517831  115591 out.go:177] * [embed-certs-209025] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:50:49.519496  115591 notify.go:220] Checking for updates...
	I1206 19:50:49.519507  115591 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:50:49.521356  115591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:50:49.523241  115591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:50:49.525016  115591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:50:49.526632  115591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:50:49.528148  115591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:50:49.530159  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:50:49.530586  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.530636  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.545128  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I1206 19:50:49.545881  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.547345  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.547375  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.547739  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.547926  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.548144  115591 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:50:49.548458  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:50:49.548506  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:50:49.562767  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I1206 19:50:49.563139  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:50:49.563567  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:50:49.563588  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:50:49.563913  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:50:49.564112  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:50:49.600267  115591 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 19:50:49.601977  115591 start.go:298] selected driver: kvm2
	I1206 19:50:49.601996  115591 start.go:902] validating driver "kvm2" against &{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.602089  115591 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:50:49.602812  115591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.602891  115591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 19:50:49.617831  115591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 19:50:49.618234  115591 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:50:49.618296  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:50:49.618306  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:50:49.618316  115591 start_flags.go:323] config:
	{Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:50:49.618468  115591 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:50:49.620428  115591 out.go:177] * Starting control plane node embed-certs-209025 in cluster embed-certs-209025
	I1206 19:50:46.558601  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:46.558636  115497 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:46.558644  115497 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:46.558714  115497 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:46.558724  115497 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:46.558837  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:50:46.559024  115497 start.go:365] acquiring machines lock for default-k8s-diff-port-380424: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:49.622242  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:50:49.622298  115591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 19:50:49.622320  115591 cache.go:56] Caching tarball of preloaded images
	I1206 19:50:49.622419  115591 preload.go:174] Found /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 19:50:49.622431  115591 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 19:50:49.622525  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:50:49.622798  115591 start.go:365] acquiring machines lock for embed-certs-209025: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:50:51.693503  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:50:54.765519  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:00.845535  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:03.917509  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:09.997591  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:13.069427  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:19.149482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:22.221565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:28.301531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:31.373569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:37.453523  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:40.525531  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:46.605538  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:49.677544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:55.757544  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:51:58.829552  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:04.909569  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:07.981555  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:14.061549  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:17.133576  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:23.213558  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:26.285482  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:32.365550  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:35.437574  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:41.517473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:44.589458  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:50.669534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:53.741496  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:52:59.821528  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:02.893489  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:08.973534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:12.045527  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:18.125473  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:21.197472  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:27.277533  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:30.349580  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:36.429514  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:39.501584  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:45.581524  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:48.653547  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:54.733543  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:53:57.805491  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:03.885571  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:06.957565  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:13.037470  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:16.109461  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:22.189477  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:25.261563  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:31.341534  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:34.413513  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:40.493530  115078 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.5:22: connect: no route to host
	I1206 19:54:43.497878  115217 start.go:369] acquired machines lock for "old-k8s-version-448851" in 4m25.369261381s
	I1206 19:54:43.497937  115217 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:54:43.497949  115217 fix.go:54] fixHost starting: 
	I1206 19:54:43.498301  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:54:43.498331  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:54:43.513072  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I1206 19:54:43.513520  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:54:43.514005  115217 main.go:141] libmachine: Using API Version  1
	I1206 19:54:43.514035  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:54:43.514375  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:54:43.514571  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:54:43.514716  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 19:54:43.516245  115217 fix.go:102] recreateIfNeeded on old-k8s-version-448851: state=Stopped err=<nil>
	I1206 19:54:43.516266  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	W1206 19:54:43.516391  115217 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:54:43.518413  115217 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-448851" ...
	I1206 19:54:43.495395  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:54:43.495445  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:54:43.497720  115078 machine.go:91] provisioned docker machine in 4m37.37101565s
	I1206 19:54:43.497766  115078 fix.go:56] fixHost completed within 4m37.395231745s
	I1206 19:54:43.497773  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 4m37.395253694s
	W1206 19:54:43.497813  115078 start.go:694] error starting host: provision: host is not running
	W1206 19:54:43.497949  115078 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1206 19:54:43.497960  115078 start.go:709] Will try again in 5 seconds ...
	I1206 19:54:43.519752  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Start
	I1206 19:54:43.519905  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring networks are active...
	I1206 19:54:43.520785  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network default is active
	I1206 19:54:43.521155  115217 main.go:141] libmachine: (old-k8s-version-448851) Ensuring network mk-old-k8s-version-448851 is active
	I1206 19:54:43.521477  115217 main.go:141] libmachine: (old-k8s-version-448851) Getting domain xml...
	I1206 19:54:43.522123  115217 main.go:141] libmachine: (old-k8s-version-448851) Creating domain...
	I1206 19:54:44.758967  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting to get IP...
	I1206 19:54:44.759812  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:44.760194  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:44.760255  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:44.760156  116186 retry.go:31] will retry after 298.997725ms: waiting for machine to come up
	I1206 19:54:45.061071  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.061521  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.061545  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.061474  116186 retry.go:31] will retry after 338.263286ms: waiting for machine to come up
	I1206 19:54:45.401161  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.401614  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.401641  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.401572  116186 retry.go:31] will retry after 468.987471ms: waiting for machine to come up
	I1206 19:54:45.872203  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:45.872644  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:45.872675  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:45.872586  116186 retry.go:31] will retry after 447.252306ms: waiting for machine to come up
	I1206 19:54:46.321277  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.321583  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.321609  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.321549  116186 retry.go:31] will retry after 591.206607ms: waiting for machine to come up
	I1206 19:54:46.913936  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:46.914351  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:46.914412  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:46.914260  116186 retry.go:31] will retry after 888.979547ms: waiting for machine to come up
	I1206 19:54:47.805332  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:47.805783  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:47.805814  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:47.805722  116186 retry.go:31] will retry after 1.088490978s: waiting for machine to come up
	I1206 19:54:48.499603  115078 start.go:365] acquiring machines lock for no-preload-989559: {Name:mk49ce640266d8c664a871ed4989f65c26b6fae1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 19:54:48.895892  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:48.896316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:48.896347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:48.896249  116186 retry.go:31] will retry after 1.145932913s: waiting for machine to come up
	I1206 19:54:50.043740  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:50.044169  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:50.044199  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:50.044136  116186 retry.go:31] will retry after 1.302468984s: waiting for machine to come up
	I1206 19:54:51.347696  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:51.348093  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:51.348124  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:51.348039  116186 retry.go:31] will retry after 2.099836852s: waiting for machine to come up
	I1206 19:54:53.450166  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:53.450638  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:53.450678  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:53.450566  116186 retry.go:31] will retry after 1.877757048s: waiting for machine to come up
	I1206 19:54:55.331257  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:55.331697  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:55.331752  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:55.331671  116186 retry.go:31] will retry after 3.399849348s: waiting for machine to come up
	I1206 19:54:58.733325  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:54:58.733712  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | unable to find current IP address of domain old-k8s-version-448851 in network mk-old-k8s-version-448851
	I1206 19:54:58.733736  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | I1206 19:54:58.733664  116186 retry.go:31] will retry after 4.308323214s: waiting for machine to come up
	I1206 19:55:04.350333  115497 start.go:369] acquired machines lock for "default-k8s-diff-port-380424" in 4m17.791271724s
	I1206 19:55:04.350411  115497 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:04.350426  115497 fix.go:54] fixHost starting: 
	I1206 19:55:04.350878  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:04.350927  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:04.367462  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I1206 19:55:04.367935  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:04.368546  115497 main.go:141] libmachine: Using API Version  1
	I1206 19:55:04.368580  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:04.368972  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:04.369197  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:04.369417  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 19:55:04.370940  115497 fix.go:102] recreateIfNeeded on default-k8s-diff-port-380424: state=Stopped err=<nil>
	I1206 19:55:04.370982  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	W1206 19:55:04.371135  115497 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:04.373809  115497 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-380424" ...
	I1206 19:55:03.047055  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047484  115217 main.go:141] libmachine: (old-k8s-version-448851) Found IP for machine: 192.168.61.33
	I1206 19:55:03.047516  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has current primary IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.047527  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserving static IP address...
	I1206 19:55:03.048083  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.048116  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | skip adding static IP to network mk-old-k8s-version-448851 - found existing host DHCP lease matching {name: "old-k8s-version-448851", mac: "52:54:00:91:ad:26", ip: "192.168.61.33"}
	I1206 19:55:03.048135  115217 main.go:141] libmachine: (old-k8s-version-448851) Reserved static IP address: 192.168.61.33
	I1206 19:55:03.048146  115217 main.go:141] libmachine: (old-k8s-version-448851) Waiting for SSH to be available...
	I1206 19:55:03.048158  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Getting to WaitForSSH function...
	I1206 19:55:03.050347  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.050682  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.050793  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH client type: external
	I1206 19:55:03.050872  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa (-rw-------)
	I1206 19:55:03.050913  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:03.050935  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | About to run SSH command:
	I1206 19:55:03.050956  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | exit 0
	I1206 19:55:03.137326  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:03.137753  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetConfigRaw
	I1206 19:55:03.138415  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.140903  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141314  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.141341  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.141671  115217 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/config.json ...
	I1206 19:55:03.141899  115217 machine.go:88] provisioning docker machine ...
	I1206 19:55:03.141924  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:03.142133  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142284  115217 buildroot.go:166] provisioning hostname "old-k8s-version-448851"
	I1206 19:55:03.142305  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.142511  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.144778  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145119  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.145144  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.145289  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.145451  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145582  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.145705  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.145829  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.146319  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.146343  115217 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448851 && echo "old-k8s-version-448851" | sudo tee /etc/hostname
	I1206 19:55:03.270447  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448851
	
	I1206 19:55:03.270490  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.273453  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273769  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.273802  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.273957  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.274148  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274326  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.274426  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.274576  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.274893  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.274910  115217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448851/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:03.395200  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:03.395232  115217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:03.395281  115217 buildroot.go:174] setting up certificates
	I1206 19:55:03.395298  115217 provision.go:83] configureAuth start
	I1206 19:55:03.395320  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetMachineName
	I1206 19:55:03.395585  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:03.397989  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398373  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.398405  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.398547  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.400869  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401196  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.401223  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.401369  115217 provision.go:138] copyHostCerts
	I1206 19:55:03.401492  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:03.401513  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:03.401600  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:03.401718  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:03.401730  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:03.401778  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:03.401857  115217 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:03.401867  115217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:03.401899  115217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:03.401971  115217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448851 san=[192.168.61.33 192.168.61.33 localhost 127.0.0.1 minikube old-k8s-version-448851]
	I1206 19:55:03.655010  115217 provision.go:172] copyRemoteCerts
	I1206 19:55:03.655082  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:03.655110  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.657860  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658301  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.658336  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.658529  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.658738  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.658914  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.659068  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:03.742021  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:03.765284  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 19:55:03.788562  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:03.811692  115217 provision.go:86] duration metric: configureAuth took 416.376347ms
	I1206 19:55:03.811722  115217 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:03.811943  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 19:55:03.812058  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:03.814518  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.814898  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:03.814934  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:03.815149  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:03.815371  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815541  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:03.815663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:03.815787  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:03.816094  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:03.816121  115217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:04.115752  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:04.115780  115217 machine.go:91] provisioned docker machine in 973.864642ms
	I1206 19:55:04.115790  115217 start.go:300] post-start starting for "old-k8s-version-448851" (driver="kvm2")
	I1206 19:55:04.115802  115217 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:04.115825  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.116197  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:04.116226  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.119234  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119559  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.119586  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.119801  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.120047  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.120228  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.120391  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.203195  115217 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:04.207210  115217 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:04.207238  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:04.207315  115217 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:04.207392  115217 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:04.207475  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:04.215469  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:04.238407  115217 start.go:303] post-start completed in 122.598676ms
	I1206 19:55:04.238437  115217 fix.go:56] fixHost completed within 20.740486511s
	I1206 19:55:04.238467  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.241147  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241522  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.241558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.241720  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.241992  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242187  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.242346  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.242488  115217 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:04.242801  115217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1206 19:55:04.242813  115217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:04.350154  115217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892504.298339573
	
	I1206 19:55:04.350177  115217 fix.go:206] guest clock: 1701892504.298339573
	I1206 19:55:04.350185  115217 fix.go:219] Guest: 2023-12-06 19:55:04.298339573 +0000 UTC Remote: 2023-12-06 19:55:04.238442081 +0000 UTC m=+286.264851054 (delta=59.897492ms)
	I1206 19:55:04.350206  115217 fix.go:190] guest clock delta is within tolerance: 59.897492ms
	I1206 19:55:04.350212  115217 start.go:83] releasing machines lock for "old-k8s-version-448851", held for 20.852295937s
	I1206 19:55:04.350240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.350562  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:04.353070  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353519  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.353547  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.353732  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354331  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354552  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 19:55:04.354641  115217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:04.354689  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.354815  115217 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:04.354844  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 19:55:04.357316  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357558  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357703  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.357734  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.357841  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358006  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:04.358031  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358052  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:04.358161  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 19:55:04.358241  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358322  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 19:55:04.358448  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.358575  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 19:55:04.358734  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 19:55:04.469402  115217 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:04.475231  115217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:04.618312  115217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:04.625482  115217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:04.625557  115217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:04.640251  115217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:04.640281  115217 start.go:475] detecting cgroup driver to use...
	I1206 19:55:04.640368  115217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:04.654153  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:04.666295  115217 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:04.666387  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:04.678579  115217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:04.692472  115217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:04.793090  115217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:04.909331  115217 docker.go:219] disabling docker service ...
	I1206 19:55:04.909399  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:04.922479  115217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:04.934122  115217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:05.048844  115217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:05.156415  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:05.172525  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:05.190303  115217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1206 19:55:05.190363  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.199967  115217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:05.200048  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.209725  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:05.218770  115217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
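Taken together, the three sed edits above should leave the relevant lines of /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a sketch; the rest of the stock Buildroot config is assumed unchanged):

    pause_image = "registry.k8s.io/pause:3.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"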
	I1206 19:55:05.227835  115217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:05.237006  115217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:05.244839  115217 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:05.244899  115217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:05.256528  115217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
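The sysctl probe above most likely fails only because br_netfilter is not loaded yet; doing the same setup by hand would look roughly like this (illustrative equivalent, not taken from the log):

    sudo modprobe br_netfilter             # creates /proc/sys/net/bridge/bridge-nf-call-iptables
    sudo sysctl -w net.ipv4.ip_forward=1   # same effect as the echo into /proc above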
	I1206 19:55:05.266360  115217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:05.387203  115217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:05.555553  115217 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:05.555668  115217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:05.564619  115217 start.go:543] Will wait 60s for crictl version
	I1206 19:55:05.564682  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:05.568979  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:05.611883  115217 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:05.611986  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.666757  115217 ssh_runner.go:195] Run: crio --version
	I1206 19:55:05.725942  115217 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1206 19:55:04.375626  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Start
	I1206 19:55:04.375819  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring networks are active...
	I1206 19:55:04.376548  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network default is active
	I1206 19:55:04.376923  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Ensuring network mk-default-k8s-diff-port-380424 is active
	I1206 19:55:04.377416  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Getting domain xml...
	I1206 19:55:04.378003  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Creating domain...
	I1206 19:55:05.667493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting to get IP...
	I1206 19:55:05.668629  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669112  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.669148  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.669064  116315 retry.go:31] will retry after 259.414087ms: waiting for machine to come up
	I1206 19:55:05.930773  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:05.931232  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:05.931129  116315 retry.go:31] will retry after 319.702286ms: waiting for machine to come up
	I1206 19:55:06.252911  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253423  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.253458  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.253363  116315 retry.go:31] will retry after 403.286071ms: waiting for machine to come up
	I1206 19:55:05.727444  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetIP
	I1206 19:55:05.730503  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.730864  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 19:55:05.730900  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 19:55:05.731151  115217 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:05.735826  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
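The bash one-liner above rewrites /etc/hosts in place: it drops any existing host.minikube.internal entry and appends a fresh one, so afterwards the file should contain a line like this (sketch):

    192.168.61.1	host.minikube.internal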
	I1206 19:55:05.748254  115217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 19:55:05.748312  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:05.799380  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:05.799468  115217 ssh_runner.go:195] Run: which lz4
	I1206 19:55:05.803715  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:05.808059  115217 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:05.808093  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1206 19:55:07.624367  115217 crio.go:444] Took 1.820689 seconds to copy over tarball
	I1206 19:55:07.624452  115217 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
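tar's -I option names the decompression filter, so the extraction above is roughly equivalent to piping through lz4 manually (illustrative only):

    lz4 -dc /preloaded.tar.lz4 | sudo tar -x -C /var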
	I1206 19:55:06.658075  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:06.658800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:06.658710  116315 retry.go:31] will retry after 572.663186ms: waiting for machine to come up
	I1206 19:55:07.233562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233898  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.233927  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.233861  116315 retry.go:31] will retry after 762.563485ms: waiting for machine to come up
	I1206 19:55:07.997980  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:07.998453  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:07.998368  116315 retry.go:31] will retry after 885.694635ms: waiting for machine to come up
	I1206 19:55:08.885521  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:08.885983  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:08.885918  116315 retry.go:31] will retry after 924.594214ms: waiting for machine to come up
	I1206 19:55:09.812796  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813271  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:09.813305  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:09.813205  116315 retry.go:31] will retry after 1.485258028s: waiting for machine to come up
	I1206 19:55:11.300830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301385  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:11.301424  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:11.301315  116315 retry.go:31] will retry after 1.232055429s: waiting for machine to come up
	I1206 19:55:10.452537  115217 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.828052972s)
	I1206 19:55:10.452565  115217 crio.go:451] Took 2.828166 seconds to extract the tarball
	I1206 19:55:10.452574  115217 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:10.493620  115217 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:10.539181  115217 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1206 19:55:10.539218  115217 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:55:10.539312  115217 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.539318  115217 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.539358  115217 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.539364  115217 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.539515  115217 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.539529  115217 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.539331  115217 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.539572  115217 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.540888  115217 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.540931  115217 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.540936  115217 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.540875  115217 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1206 19:55:10.540880  115217 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.540879  115217 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.725027  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1206 19:55:10.762761  115217 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1206 19:55:10.762810  115217 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1206 19:55:10.762862  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.763731  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:55:10.766312  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.768181  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1206 19:55:10.773115  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.829543  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.841186  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.856309  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.873212  115217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:10.983390  115217 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1206 19:55:10.983444  115217 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:10.983463  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1206 19:55:10.983498  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983510  115217 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1206 19:55:10.983530  115217 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1206 19:55:10.983564  115217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1206 19:55:10.983628  115217 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1206 19:55:10.983663  115217 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:10.983672  115217 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1206 19:55:10.983700  115217 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:10.983712  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983567  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983730  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:10.983802  115217 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1206 19:55:10.983829  115217 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:10.983861  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009102  115217 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1206 19:55:11.009135  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1206 19:55:11.009152  115217 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.009211  115217 ssh_runner.go:195] Run: which crictl
	I1206 19:55:11.009254  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1206 19:55:11.009273  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1206 19:55:11.009307  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1206 19:55:11.009342  115217 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1206 19:55:11.009355  115217 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009388  115217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1206 19:55:11.009390  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1206 19:55:11.130238  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1206 19:55:11.158336  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1206 19:55:11.158375  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1206 19:55:11.158431  115217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1206 19:55:11.158438  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1206 19:55:11.158507  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1206 19:55:12.535831  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536331  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:12.536374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:12.536253  116315 retry.go:31] will retry after 1.865303927s: waiting for machine to come up
	I1206 19:55:14.402935  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403326  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:14.403354  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:14.403268  116315 retry.go:31] will retry after 1.960994282s: waiting for machine to come up
	I1206 19:55:16.366289  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366763  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:16.366792  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:16.366689  116315 retry.go:31] will retry after 2.933451161s: waiting for machine to come up
	I1206 19:55:13.478881  115217 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0: (2.320421557s)
	I1206 19:55:13.478937  115217 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1206 19:55:13.478892  115217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.469478111s)
	I1206 19:55:13.478983  115217 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1206 19:55:13.479043  115217 cache_images.go:92] LoadImages completed in 2.939808867s
	W1206 19:55:13.479149  115217 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1206 19:55:13.479228  115217 ssh_runner.go:195] Run: crio config
	I1206 19:55:13.543270  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:13.543302  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:13.543328  115217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:13.543355  115217 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448851 NodeName:old-k8s-version-448851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 19:55:13.543557  115217 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-448851"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-448851
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.33:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:13.543700  115217 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-448851 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:13.543776  115217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1206 19:55:13.554524  115217 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:13.554611  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:13.566752  115217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1206 19:55:13.586027  115217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:13.603800  115217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1206 19:55:13.627098  115217 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:13.632470  115217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:13.651452  115217 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851 for IP: 192.168.61.33
	I1206 19:55:13.651507  115217 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:13.651670  115217 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:13.651748  115217 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:13.651860  115217 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.key
	I1206 19:55:13.651932  115217 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key.efa8c2ad
	I1206 19:55:13.651994  115217 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key
	I1206 19:55:13.652142  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:13.652183  115217 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:13.652201  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:13.652241  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:13.652283  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:13.652326  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:13.652389  115217 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:13.653344  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:13.687786  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:13.723604  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:13.756434  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:13.789066  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:13.821087  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:13.849840  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:13.876520  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:13.901763  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:13.932106  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:13.961708  115217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:13.991586  115217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:14.009848  115217 ssh_runner.go:195] Run: openssl version
	I1206 19:55:14.017661  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:14.031103  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037142  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.037212  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:14.044737  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:14.058296  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:14.068591  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.073995  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.074067  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:14.079922  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:14.090541  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:14.100915  115217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106692  115217 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.106766  115217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:14.112592  115217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
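The test/ln commands above install each PEM under the hash-named path that OpenSSL uses for CA lookups; the general pattern, for any one of the certs, is roughly this (illustrative sketch, not from the log):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem)
    sudo ln -fs /etc/ssl/certs/70834.pem "/etc/ssl/certs/${HASH}.0"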
	I1206 19:55:14.122630  115217 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:14.128544  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:14.136649  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:14.143060  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:14.151002  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:14.157202  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:14.163456  115217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
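Each of the openssl runs above uses -checkend 86400, which exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero if it will have expired by then; a manual spot check would look like this (illustrative only):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "still valid for at least 24h"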
	I1206 19:55:14.171607  115217 kubeadm.go:404] StartCluster: {Name:old-k8s-version-448851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-448851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:14.171720  115217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:14.171771  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:14.216630  115217 cri.go:89] found id: ""
	I1206 19:55:14.216712  115217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:14.229800  115217 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:14.229832  115217 kubeadm.go:636] restartCluster start
	I1206 19:55:14.229889  115217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:14.242347  115217 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.243973  115217 kubeconfig.go:92] found "old-k8s-version-448851" server: "https://192.168.61.33:8443"
	I1206 19:55:14.247781  115217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:14.257060  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.257118  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.268619  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.268644  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.268692  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.279803  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:14.780509  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:14.780603  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:14.796116  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.280797  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.280910  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.296260  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:15.779895  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:15.780023  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:15.796115  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.280792  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.280884  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.297258  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:16.780884  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:16.781007  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:16.796430  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.279982  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.280088  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.291102  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:17.780721  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:17.780865  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:17.792253  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.302288  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302717  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | unable to find current IP address of domain default-k8s-diff-port-380424 in network mk-default-k8s-diff-port-380424
	I1206 19:55:19.302744  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | I1206 19:55:19.302670  116315 retry.go:31] will retry after 3.226665023s: waiting for machine to come up
	I1206 19:55:18.280684  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.280777  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.292535  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:18.780650  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:18.780722  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:18.793872  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.280431  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.280507  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.292188  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:19.780793  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:19.780914  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:19.791873  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.280527  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.280637  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.291886  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:20.780810  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:20.780890  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:20.791837  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.280389  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.280479  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.291743  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:21.780252  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:21.780343  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:21.791452  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.280013  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.280120  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.291240  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:22.780451  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:22.780528  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:22.791668  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.690245  115591 start.go:369] acquired machines lock for "embed-certs-209025" in 4m34.06740814s
	I1206 19:55:23.690318  115591 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:23.690327  115591 fix.go:54] fixHost starting: 
	I1206 19:55:23.690686  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:23.690728  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:23.706483  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I1206 19:55:23.706891  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:23.707367  115591 main.go:141] libmachine: Using API Version  1
	I1206 19:55:23.707391  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:23.707744  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:23.707925  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:23.708059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 19:55:23.709586  115591 fix.go:102] recreateIfNeeded on embed-certs-209025: state=Stopped err=<nil>
	I1206 19:55:23.709612  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	W1206 19:55:23.709803  115591 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:23.712015  115591 out.go:177] * Restarting existing kvm2 VM for "embed-certs-209025" ...
	I1206 19:55:23.713472  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Start
	I1206 19:55:23.713637  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring networks are active...
	I1206 19:55:23.714335  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network default is active
	I1206 19:55:23.714639  115591 main.go:141] libmachine: (embed-certs-209025) Ensuring network mk-embed-certs-209025 is active
	I1206 19:55:23.714978  115591 main.go:141] libmachine: (embed-certs-209025) Getting domain xml...
	I1206 19:55:23.715617  115591 main.go:141] libmachine: (embed-certs-209025) Creating domain...
	I1206 19:55:22.530618  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has current primary IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.531107  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Found IP for machine: 192.168.72.22
	I1206 19:55:22.531117  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserving static IP address...
	I1206 19:55:22.531437  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.531465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | skip adding static IP to network mk-default-k8s-diff-port-380424 - found existing host DHCP lease matching {name: "default-k8s-diff-port-380424", mac: "52:54:00:15:24:2b", ip: "192.168.72.22"}
	I1206 19:55:22.531485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Getting to WaitForSSH function...
	I1206 19:55:22.531496  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Reserved static IP address: 192.168.72.22
	I1206 19:55:22.531554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Waiting for SSH to be available...
	I1206 19:55:22.533485  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533729  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.533752  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.533853  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH client type: external
	I1206 19:55:22.533880  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa (-rw-------)
	I1206 19:55:22.533916  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:22.533941  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | About to run SSH command:
	I1206 19:55:22.533957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | exit 0
	I1206 19:55:22.620864  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:22.621194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetConfigRaw
	I1206 19:55:22.621844  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.624194  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624565  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.624599  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.624876  115497 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/config.json ...
	I1206 19:55:22.625062  115497 machine.go:88] provisioning docker machine ...
	I1206 19:55:22.625081  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:22.625310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625481  115497 buildroot.go:166] provisioning hostname "default-k8s-diff-port-380424"
	I1206 19:55:22.625502  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.625635  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.627886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628227  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.628255  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.628352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.628499  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628658  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.628784  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.628940  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.629440  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.629462  115497 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-380424 && echo "default-k8s-diff-port-380424" | sudo tee /etc/hostname
	I1206 19:55:22.753829  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-380424
	
	I1206 19:55:22.753867  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.756620  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.756958  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.757001  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.757129  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.757375  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757558  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.757700  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.757868  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:22.758197  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:22.758252  115497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-380424' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-380424/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-380424' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:22.878138  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:22.878175  115497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:22.878202  115497 buildroot.go:174] setting up certificates
	I1206 19:55:22.878246  115497 provision.go:83] configureAuth start
	I1206 19:55:22.878259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetMachineName
	I1206 19:55:22.878557  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:22.881145  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881515  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.881547  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.881657  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.883591  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.883943  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.883981  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.884062  115497 provision.go:138] copyHostCerts
	I1206 19:55:22.884122  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:22.884135  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:22.884203  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:22.884334  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:22.884346  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:22.884375  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:22.884446  115497 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:22.884457  115497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:22.884484  115497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:22.884539  115497 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-380424 san=[192.168.72.22 192.168.72.22 localhost 127.0.0.1 minikube default-k8s-diff-port-380424]
	I1206 19:55:22.973559  115497 provision.go:172] copyRemoteCerts
	I1206 19:55:22.973627  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:22.973660  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:22.976374  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976656  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:22.976695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:22.976888  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:22.977068  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:22.977300  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:22.977468  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.061925  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:23.085093  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1206 19:55:23.108283  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:55:23.131666  115497 provision.go:86] duration metric: configureAuth took 253.404471ms
	I1206 19:55:23.131701  115497 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:23.131879  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:23.131957  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.134672  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135033  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.135077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.135214  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.135436  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135622  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.135781  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.135941  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.136393  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.136427  115497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:23.445361  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:23.445389  115497 machine.go:91] provisioned docker machine in 820.312346ms
	I1206 19:55:23.445404  115497 start.go:300] post-start starting for "default-k8s-diff-port-380424" (driver="kvm2")
	I1206 19:55:23.445418  115497 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:23.445457  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.445851  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:23.445886  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.448493  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.448851  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.448879  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.449021  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.449210  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.449408  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.449562  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.535493  115497 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:23.539696  115497 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:23.539718  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:23.539780  115497 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:23.539862  115497 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:23.539968  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:23.548629  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:23.572264  115497 start.go:303] post-start completed in 126.842848ms
	I1206 19:55:23.572287  115497 fix.go:56] fixHost completed within 19.221864403s
	I1206 19:55:23.572321  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.575329  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575695  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.575739  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.575890  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.576093  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576272  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.576429  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.576599  115497 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:23.577046  115497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I1206 19:55:23.577064  115497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:23.690035  115497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892523.637580982
	
	I1206 19:55:23.690064  115497 fix.go:206] guest clock: 1701892523.637580982
	I1206 19:55:23.690084  115497 fix.go:219] Guest: 2023-12-06 19:55:23.637580982 +0000 UTC Remote: 2023-12-06 19:55:23.572291664 +0000 UTC m=+277.181979500 (delta=65.289318ms)
	I1206 19:55:23.690146  115497 fix.go:190] guest clock delta is within tolerance: 65.289318ms
	I1206 19:55:23.690159  115497 start.go:83] releasing machines lock for "default-k8s-diff-port-380424", held for 19.339778523s
	I1206 19:55:23.690192  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.690465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:23.692996  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693337  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.693369  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.693562  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694250  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 19:55:23.694336  115497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:23.694390  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.694463  115497 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:23.694486  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 19:55:23.696938  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697063  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697393  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697473  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:23.697514  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697593  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:23.697674  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.697675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 19:55:23.697876  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.697899  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 19:55:23.698044  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 19:55:23.698038  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.698167  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 19:55:23.786973  115497 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:23.814262  115497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:23.954235  115497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:23.961434  115497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:23.961523  115497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:23.981459  115497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:55:23.981488  115497 start.go:475] detecting cgroup driver to use...
	I1206 19:55:23.981550  115497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:24.000294  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:24.013738  115497 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:24.013799  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:24.030844  115497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:24.044583  115497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:24.161979  115497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:24.296507  115497 docker.go:219] disabling docker service ...
	I1206 19:55:24.296580  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:24.311171  115497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:24.323538  115497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:24.440425  115497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:24.570168  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:24.583169  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:24.600733  115497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:24.600790  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.610057  115497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:24.610129  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.621925  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.631383  115497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:24.640414  115497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:24.649853  115497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:24.657999  115497 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:24.658052  115497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:24.672821  115497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:55:24.681200  115497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:24.812790  115497 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:24.989383  115497 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:24.989483  115497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:24.995335  115497 start.go:543] Will wait 60s for crictl version
	I1206 19:55:24.995404  115497 ssh_runner.go:195] Run: which crictl
	I1206 19:55:24.999307  115497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:25.038932  115497 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:25.039046  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.085844  115497 ssh_runner.go:195] Run: crio --version
	I1206 19:55:25.148264  115497 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:25.149676  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetIP
	I1206 19:55:25.152759  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153157  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 19:55:25.153201  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 19:55:25.153451  115497 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:25.157621  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:25.173609  115497 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:25.173680  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:25.223564  115497 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:25.223647  115497 ssh_runner.go:195] Run: which lz4
	I1206 19:55:25.228720  115497 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:25.234028  115497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:25.234061  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:23.280317  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.280398  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.291959  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:23.780005  115217 api_server.go:166] Checking apiserver status ...
	I1206 19:55:23.780086  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:23.794371  115217 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:24.257148  115217 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:24.257182  115217 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:24.257196  115217 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:24.257291  115217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:24.300759  115217 cri.go:89] found id: ""
	I1206 19:55:24.300832  115217 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:24.319509  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:24.329215  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:24.329310  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338150  115217 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:24.338187  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:24.490031  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.123737  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.359750  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.550542  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:25.697003  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:25.697091  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:25.713836  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.231509  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:26.730965  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.231602  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.731612  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:27.763155  115217 api_server.go:72] duration metric: took 2.066152846s to wait for apiserver process to appear ...
	I1206 19:55:27.763181  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:27.763200  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:25.055509  115591 main.go:141] libmachine: (embed-certs-209025) Waiting to get IP...
	I1206 19:55:25.056687  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.057138  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.057192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.057100  116938 retry.go:31] will retry after 304.168381ms: waiting for machine to come up
	I1206 19:55:25.363765  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.364265  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.364404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.364341  116938 retry.go:31] will retry after 351.729741ms: waiting for machine to come up
	I1206 19:55:25.718184  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:25.718746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:25.718774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:25.718650  116938 retry.go:31] will retry after 340.321802ms: waiting for machine to come up
	I1206 19:55:26.060168  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.060796  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.060843  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.060725  116938 retry.go:31] will retry after 422.434651ms: waiting for machine to come up
	I1206 19:55:26.484420  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:26.484967  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:26.485003  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:26.484931  116938 retry.go:31] will retry after 584.854153ms: waiting for machine to come up
	I1206 19:55:27.071766  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.072298  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.072325  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.072233  116938 retry.go:31] will retry after 710.482528ms: waiting for machine to come up
	I1206 19:55:27.784162  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:27.784660  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:27.784695  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:27.784560  116938 retry.go:31] will retry after 754.279817ms: waiting for machine to come up
	I1206 19:55:28.540261  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:28.540788  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:28.540818  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:28.540728  116938 retry.go:31] will retry after 1.359726156s: waiting for machine to come up
	I1206 19:55:27.194696  115497 crio.go:444] Took 1.966010 seconds to copy over tarball
	I1206 19:55:27.194774  115497 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:30.501183  115497 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.306375512s)
	I1206 19:55:30.501222  115497 crio.go:451] Took 3.306493 seconds to extract the tarball
	I1206 19:55:30.501249  115497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:30.542574  115497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:30.587381  115497 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:30.587405  115497 cache_images.go:84] Images are preloaded, skipping loading
	I1206 19:55:30.587483  115497 ssh_runner.go:195] Run: crio config
	I1206 19:55:30.649117  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:30.649140  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:30.649163  115497 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:30.649191  115497 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.22 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-380424 NodeName:default-k8s-diff-port-380424 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:30.649383  115497 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.22
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-380424"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:55:30.649487  115497 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-380424 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1206 19:55:30.649561  115497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:30.659186  115497 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:30.659297  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:30.668534  115497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1206 19:55:30.684815  115497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:30.701801  115497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1206 19:55:30.721756  115497 ssh_runner.go:195] Run: grep 192.168.72.22	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:30.726656  115497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:30.738943  115497 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424 for IP: 192.168.72.22
	I1206 19:55:30.738981  115497 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:30.739159  115497 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:30.739219  115497 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:30.739322  115497 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.key
	I1206 19:55:30.739426  115497 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key.99d663cb
	I1206 19:55:30.739489  115497 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key
	I1206 19:55:30.739629  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:30.739672  115497 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:30.739689  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:30.739726  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:30.739762  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:30.739801  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:30.739872  115497 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:30.740532  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:30.766689  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:30.792892  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:30.817640  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:55:30.842916  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:30.868057  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:30.893993  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:30.924631  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:30.953503  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:30.980162  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:31.007247  115497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:31.034274  115497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:31.054544  115497 ssh_runner.go:195] Run: openssl version
	I1206 19:55:31.062053  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:31.077159  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083640  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.083707  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:31.091093  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:31.105305  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:31.117767  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123703  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.123798  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:31.131531  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:55:31.142449  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:31.157311  115497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163707  115497 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.163783  115497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:31.170831  115497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:31.183300  115497 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:31.188165  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:31.194562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:31.201769  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:31.209562  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:31.217346  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:31.225522  115497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 19:55:31.233755  115497 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-380424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-380424 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:31.233889  115497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:31.233952  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:31.278891  115497 cri.go:89] found id: ""
	I1206 19:55:31.278972  115497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:31.291971  115497 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:31.291999  115497 kubeadm.go:636] restartCluster start
	I1206 19:55:31.292070  115497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:31.304934  115497 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.306156  115497 kubeconfig.go:92] found "default-k8s-diff-port-380424" server: "https://192.168.72.22:8444"
	I1206 19:55:31.308710  115497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:31.321910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.321976  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.339075  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:31.339096  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.339143  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.354436  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.765826  115217 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1206 19:55:32.765895  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:29.902670  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:29.903123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:29.903152  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:29.903081  116938 retry.go:31] will retry after 1.188380941s: waiting for machine to come up
	I1206 19:55:31.092707  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:31.093278  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:31.093311  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:31.093245  116938 retry.go:31] will retry after 1.854046475s: waiting for machine to come up
	I1206 19:55:32.948423  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:32.948866  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:32.948891  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:32.948827  116938 retry.go:31] will retry after 2.868825903s: waiting for machine to come up
	I1206 19:55:34.066100  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:34.066146  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:34.566904  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:34.573643  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:34.573675  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.066235  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.076927  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1206 19:55:35.076966  115217 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1206 19:55:35.566361  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 19:55:35.574853  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 19:55:35.585855  115217 api_server.go:141] control plane version: v1.16.0
	I1206 19:55:35.585895  115217 api_server.go:131] duration metric: took 7.822706447s to wait for apiserver health ...
	I1206 19:55:35.585908  115217 cni.go:84] Creating CNI manager for ""
	I1206 19:55:35.585917  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:35.587984  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:31.855148  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:31.855275  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:31.867628  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.355238  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.355330  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.368154  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:32.854710  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:32.854820  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:32.870926  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.355493  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.355586  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.371984  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:33.854511  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:33.854604  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:33.871260  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.354793  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.354897  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.371333  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:34.855487  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:34.855575  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:34.868348  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.354949  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.355026  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.367357  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.854910  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:35.855003  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:35.871382  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:36.354908  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.355047  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.371112  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:35.589529  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:35.599454  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:55:35.616803  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:35.626793  115217 system_pods.go:59] 7 kube-system pods found
	I1206 19:55:35.626829  115217 system_pods.go:61] "coredns-5644d7b6d9-nrtk9" [447f7434-3f97-4e3f-9451-d9a54bff7ba1] Running
	I1206 19:55:35.626837  115217 system_pods.go:61] "etcd-old-k8s-version-448851" [77c1f822-788f-4f28-8f8e-54278d5d9e10] Running
	I1206 19:55:35.626843  115217 system_pods.go:61] "kube-apiserver-old-k8s-version-448851" [d3cf3d55-8862-4f81-ac61-99b202469859] Running
	I1206 19:55:35.626851  115217 system_pods.go:61] "kube-controller-manager-old-k8s-version-448851" [58ffb9bc-b5a3-4c64-a78f-da0011e6c277] Running
	I1206 19:55:35.626869  115217 system_pods.go:61] "kube-proxy-sw4qv" [6c08ab4a-447b-42e9-a617-ac35d66cf4ea] Running
	I1206 19:55:35.626879  115217 system_pods.go:61] "kube-scheduler-old-k8s-version-448851" [378ead75-3fd6-4cfd-a063-f2afc3a1cd12] Running
	I1206 19:55:35.626886  115217 system_pods.go:61] "storage-provisioner" [cce901c3-37d9-4ae2-ab9c-99bb7fda6d23] Running
	I1206 19:55:35.626901  115217 system_pods.go:74] duration metric: took 10.069819ms to wait for pod list to return data ...
	I1206 19:55:35.626910  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:35.632164  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:35.632240  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:35.632256  115217 node_conditions.go:105] duration metric: took 5.340532ms to run NodePressure ...
	I1206 19:55:35.632282  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:35.925990  115217 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:35.935849  115217 retry.go:31] will retry after 256.122518ms: kubelet not initialised
	I1206 19:55:36.197872  115217 retry.go:31] will retry after 337.717759ms: kubelet not initialised
	I1206 19:55:36.541368  115217 retry.go:31] will retry after 784.037462ms: kubelet not initialised
	I1206 19:55:37.331284  115217 retry.go:31] will retry after 921.381118ms: kubelet not initialised
	I1206 19:55:35.819131  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:35.819759  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:35.819793  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:35.819698  116938 retry.go:31] will retry after 2.281000862s: waiting for machine to come up
	I1206 19:55:38.103281  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:38.103807  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:38.103845  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:38.103736  116938 retry.go:31] will retry after 3.076134377s: waiting for machine to come up
	I1206 19:55:36.855191  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:36.855309  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:36.872110  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.354562  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.354682  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.370156  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:37.854600  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:37.854726  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:37.870621  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.355289  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.355391  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.368595  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:38.855116  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:38.855218  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:38.868455  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.354955  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.355048  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.368875  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:39.854833  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:39.854928  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:39.866765  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.354989  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.355106  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.367526  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:40.854791  115497 api_server.go:166] Checking apiserver status ...
	I1206 19:55:40.854873  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:40.866579  115497 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:41.322422  115497 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:55:41.322456  115497 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:55:41.322472  115497 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:55:41.322548  115497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:41.360234  115497 cri.go:89] found id: ""
	I1206 19:55:41.360301  115497 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:55:41.376968  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:55:41.387639  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:55:41.387694  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397586  115497 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:55:41.397617  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:38.258758  115217 retry.go:31] will retry after 961.817778ms: kubelet not initialised
	I1206 19:55:39.225505  115217 retry.go:31] will retry after 1.751905914s: kubelet not initialised
	I1206 19:55:40.982344  115217 retry.go:31] will retry after 1.649102014s: kubelet not initialised
	I1206 19:55:42.639410  115217 retry.go:31] will retry after 3.317462401s: kubelet not initialised
	I1206 19:55:41.182443  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:41.182893  115591 main.go:141] libmachine: (embed-certs-209025) DBG | unable to find current IP address of domain embed-certs-209025 in network mk-embed-certs-209025
	I1206 19:55:41.182930  115591 main.go:141] libmachine: (embed-certs-209025) DBG | I1206 19:55:41.182837  116938 retry.go:31] will retry after 4.029797575s: waiting for machine to come up
	I1206 19:55:41.519134  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.404075  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.613308  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.707533  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:42.796041  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:55:42.796139  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:42.816782  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.336582  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:43.836183  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.336879  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:44.836718  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.336249  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:55:45.363947  115497 api_server.go:72] duration metric: took 2.567911355s to wait for apiserver process to appear ...
	I1206 19:55:45.363968  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:55:45.363984  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:46.486502  115078 start.go:369] acquired machines lock for "no-preload-989559" in 57.98684139s
	I1206 19:55:46.486560  115078 start.go:96] Skipping create...Using existing machine configuration
	I1206 19:55:46.486570  115078 fix.go:54] fixHost starting: 
	I1206 19:55:46.487006  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:55:46.487052  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:55:46.506170  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1206 19:55:46.506576  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:55:46.507081  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:55:46.507110  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:55:46.507412  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:55:46.507600  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:55:46.508110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:55:46.509817  115078 fix.go:102] recreateIfNeeded on no-preload-989559: state=Stopped err=<nil>
	I1206 19:55:46.509843  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	W1206 19:55:46.509988  115078 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 19:55:46.512103  115078 out.go:177] * Restarting existing kvm2 VM for "no-preload-989559" ...
	I1206 19:55:45.214823  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215271  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has current primary IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.215293  115591 main.go:141] libmachine: (embed-certs-209025) Found IP for machine: 192.168.50.164
	I1206 19:55:45.215341  115591 main.go:141] libmachine: (embed-certs-209025) Reserving static IP address...
	I1206 19:55:45.215738  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.215772  115591 main.go:141] libmachine: (embed-certs-209025) DBG | skip adding static IP to network mk-embed-certs-209025 - found existing host DHCP lease matching {name: "embed-certs-209025", mac: "52:54:00:4d:27:5b", ip: "192.168.50.164"}
	I1206 19:55:45.215787  115591 main.go:141] libmachine: (embed-certs-209025) Reserved static IP address: 192.168.50.164
	I1206 19:55:45.215805  115591 main.go:141] libmachine: (embed-certs-209025) Waiting for SSH to be available...
	I1206 19:55:45.215821  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Getting to WaitForSSH function...
	I1206 19:55:45.217850  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218192  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.218219  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.218370  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH client type: external
	I1206 19:55:45.218404  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa (-rw-------)
	I1206 19:55:45.218438  115591 main.go:141] libmachine: (embed-certs-209025) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:55:45.218452  115591 main.go:141] libmachine: (embed-certs-209025) DBG | About to run SSH command:
	I1206 19:55:45.218475  115591 main.go:141] libmachine: (embed-certs-209025) DBG | exit 0
	I1206 19:55:45.309353  115591 main.go:141] libmachine: (embed-certs-209025) DBG | SSH cmd err, output: <nil>: 
	I1206 19:55:45.309758  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetConfigRaw
	I1206 19:55:45.310547  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.313019  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.313369  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.313638  115591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/config.json ...
	I1206 19:55:45.313844  115591 machine.go:88] provisioning docker machine ...
	I1206 19:55:45.313870  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:45.314081  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314264  115591 buildroot.go:166] provisioning hostname "embed-certs-209025"
	I1206 19:55:45.314298  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.314509  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.316952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317361  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.317395  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.317640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.317821  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.317954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.318079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.318235  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.318665  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.318683  115591 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-209025 && echo "embed-certs-209025" | sudo tee /etc/hostname
	I1206 19:55:45.459071  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-209025
	
	I1206 19:55:45.459107  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.461953  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462334  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.462362  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.462592  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.462814  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463010  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.463162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.463353  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.463887  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.463916  115591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-209025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-209025/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-209025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:55:45.597186  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:55:45.597220  115591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:55:45.597253  115591 buildroot.go:174] setting up certificates
	I1206 19:55:45.597270  115591 provision.go:83] configureAuth start
	I1206 19:55:45.597288  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetMachineName
	I1206 19:55:45.597658  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:45.600582  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.600954  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.600983  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.601138  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.603355  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603746  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.603774  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.603942  115591 provision.go:138] copyHostCerts
	I1206 19:55:45.604012  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:55:45.604037  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:55:45.604113  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:55:45.604227  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:55:45.604243  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:55:45.604277  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:55:45.604353  115591 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:55:45.604363  115591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:55:45.604390  115591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:55:45.604454  115591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-209025 san=[192.168.50.164 192.168.50.164 localhost 127.0.0.1 minikube embed-certs-209025]
	I1206 19:55:45.706944  115591 provision.go:172] copyRemoteCerts
	I1206 19:55:45.707028  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:55:45.707069  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.709985  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710357  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.710398  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.710530  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.710738  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.710917  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.711092  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:45.807035  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:55:45.831480  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:55:45.855902  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 19:55:45.882797  115591 provision.go:86] duration metric: configureAuth took 285.508678ms
	I1206 19:55:45.882831  115591 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:55:45.883074  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:55:45.883156  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:45.886130  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886576  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:45.886611  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:45.886825  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:45.887026  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887198  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:45.887348  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:45.887570  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:45.887900  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:45.887926  115591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:55:46.217654  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:55:46.217732  115591 machine.go:91] provisioned docker machine in 903.869734ms
	I1206 19:55:46.217748  115591 start.go:300] post-start starting for "embed-certs-209025" (driver="kvm2")
	I1206 19:55:46.217762  115591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:55:46.217788  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.218154  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:55:46.218190  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.220968  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221345  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.221378  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.221557  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.221781  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.221951  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.222093  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.316289  115591 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:55:46.321014  115591 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:55:46.321046  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:55:46.321108  115591 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:55:46.321183  115591 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:55:46.321304  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:55:46.331967  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:46.358983  115591 start.go:303] post-start completed in 141.214825ms
	I1206 19:55:46.359014  115591 fix.go:56] fixHost completed within 22.668688221s
	I1206 19:55:46.359037  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.361846  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362179  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.362212  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.362452  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.362704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.362897  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.363073  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.363310  115591 main.go:141] libmachine: Using SSH client type: native
	I1206 19:55:46.363803  115591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I1206 19:55:46.363823  115591 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:55:46.486321  115591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892546.422221924
	
	I1206 19:55:46.486350  115591 fix.go:206] guest clock: 1701892546.422221924
	I1206 19:55:46.486361  115591 fix.go:219] Guest: 2023-12-06 19:55:46.422221924 +0000 UTC Remote: 2023-12-06 19:55:46.359018 +0000 UTC m=+296.897065855 (delta=63.203924ms)
	I1206 19:55:46.486385  115591 fix.go:190] guest clock delta is within tolerance: 63.203924ms
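(The 63.203924ms delta reported above is plain arithmetic on the two timestamps: the guest's `date +%s.%N` read 46.422221924s while the local clock read 46.359018s at that moment, and 46.422221924 - 46.359018 = 0.063203924s ≈ 63.2ms, which is inside the skew tolerance the log mentions, so the guest clock is left untouched.)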
	I1206 19:55:46.486391  115591 start.go:83] releasing machines lock for "embed-certs-209025", held for 22.796102432s
	I1206 19:55:46.486420  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.486727  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:46.489589  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.489890  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.489922  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.490079  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490643  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490836  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 19:55:46.490924  115591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:55:46.490974  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.491257  115591 ssh_runner.go:195] Run: cat /version.json
	I1206 19:55:46.491281  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 19:55:46.494034  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494326  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494379  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494405  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:46.494704  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.494748  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:46.494900  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.494958  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 19:55:46.495019  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495144  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 19:55:46.495137  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.495269  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 19:55:46.495397  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 19:55:46.587575  115591 ssh_runner.go:195] Run: systemctl --version
	I1206 19:55:46.614901  115591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:55:46.764133  115591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:55:46.771049  115591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:55:46.771133  115591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:55:46.786157  115591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
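The `%!p(MISSING)` in the find command above is only a logging artifact: the `-printf "%p, "` format string gets swallowed by Go's formatter when the argument list is logged. A readable rendering of the same command, with quoting added so it can be pasted into an interactive shell (no behavioural change intended), is:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;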
	I1206 19:55:46.786187  115591 start.go:475] detecting cgroup driver to use...
	I1206 19:55:46.786262  115591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:55:46.801158  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:55:46.812881  115591 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:55:46.812948  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:55:46.825139  115591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:55:46.838071  115591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:55:46.949823  115591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:55:47.080490  115591 docker.go:219] disabling docker service ...
	I1206 19:55:47.080572  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:55:47.094773  115591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:55:47.107963  115591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:55:47.233536  115591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:55:47.360425  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:55:47.377453  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:55:47.395959  115591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:55:47.396026  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.406599  115591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:55:47.406696  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.417082  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:55:47.427463  115591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
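The three sed edits above only rewrite keys that already exist in /etc/crio/crio.conf.d/02-crio.conf; after they run, the relevant part of that drop-in should read roughly as follows (illustrative sketch, the real drop-in carries more settings):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"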
	I1206 19:55:47.438246  115591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:55:47.449910  115591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:55:47.459620  115591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:55:47.459675  115591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:55:47.476230  115591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
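Loading br_netfilter and forcing ip_forward on are the standard kubeadm preflight requirements; a quick way to confirm the guest ended up in the expected state is:

    sudo sysctl net.bridge.bridge-nf-call-iptables   # key exists (and is expected to be 1) once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward                # 1 after the echo above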
	I1206 19:55:47.486777  115591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:55:47.597395  115591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 19:55:47.809260  115591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:55:47.809348  115591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:55:47.815968  115591 start.go:543] Will wait 60s for crictl version
	I1206 19:55:47.816035  115591 ssh_runner.go:195] Run: which crictl
	I1206 19:55:47.820214  115591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:55:47.869345  115591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:55:47.869435  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.923602  115591 ssh_runner.go:195] Run: crio --version
	I1206 19:55:47.983187  115591 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1206 19:55:45.963265  115217 retry.go:31] will retry after 4.496095904s: kubelet not initialised
	I1206 19:55:47.984954  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetIP
	I1206 19:55:47.988218  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.988742  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 19:55:47.988775  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 19:55:47.989031  115591 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1206 19:55:47.994471  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:48.008964  115591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 19:55:48.009022  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:48.056234  115591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1206 19:55:48.056333  115591 ssh_runner.go:195] Run: which lz4
	I1206 19:55:48.061573  115591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1206 19:55:48.066119  115591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:55:48.066156  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1206 19:55:46.513897  115078 main.go:141] libmachine: (no-preload-989559) Calling .Start
	I1206 19:55:46.514072  115078 main.go:141] libmachine: (no-preload-989559) Ensuring networks are active...
	I1206 19:55:46.514830  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network default is active
	I1206 19:55:46.515153  115078 main.go:141] libmachine: (no-preload-989559) Ensuring network mk-no-preload-989559 is active
	I1206 19:55:46.515533  115078 main.go:141] libmachine: (no-preload-989559) Getting domain xml...
	I1206 19:55:46.516251  115078 main.go:141] libmachine: (no-preload-989559) Creating domain...
	I1206 19:55:47.899847  115078 main.go:141] libmachine: (no-preload-989559) Waiting to get IP...
	I1206 19:55:47.900939  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:47.901513  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:47.901634  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:47.901487  117094 retry.go:31] will retry after 244.343929ms: waiting for machine to come up
	I1206 19:55:48.148254  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.148888  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.148927  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.148835  117094 retry.go:31] will retry after 258.755356ms: waiting for machine to come up
	I1206 19:55:48.409550  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.410401  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.410427  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.410308  117094 retry.go:31] will retry after 321.790541ms: waiting for machine to come up
	I1206 19:55:48.734055  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:48.734744  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:48.734768  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:48.734646  117094 retry.go:31] will retry after 464.789653ms: waiting for machine to come up
	I1206 19:55:49.201462  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.202032  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.202065  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.201985  117094 retry.go:31] will retry after 541.238407ms: waiting for machine to come up
	I1206 19:55:49.744792  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:49.745432  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:49.745461  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:49.745338  117094 retry.go:31] will retry after 791.407194ms: waiting for machine to come up
	I1206 19:55:50.538151  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:50.538857  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:50.538883  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:50.538741  117094 retry.go:31] will retry after 1.11510814s: waiting for machine to come up
	I1206 19:55:49.730248  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.730287  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:49.730318  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:49.788747  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:55:49.788796  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:55:50.289144  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.301437  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.301479  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:50.789018  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:50.800325  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:55:50.800374  115497 api_server.go:103] status: https://192.168.72.22:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:55:51.289899  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 19:55:51.297638  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 19:55:51.310738  115497 api_server.go:141] control plane version: v1.28.4
	I1206 19:55:51.310772  115497 api_server.go:131] duration metric: took 5.946796569s to wait for apiserver health ...
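The probe sequence above is the normal kube-apiserver start-up pattern: anonymous requests to /healthz are rejected with 403 until the rbac/bootstrap-roles post-start hook creates the default bindings (system:public-info-viewer grants unauthenticated access to /healthz, /livez, /readyz and /version), then 500 while the remaining hooks finish, then 200 once everything is up. The same per-check output can be reproduced by hand with an anonymous request, TLS verification skipped:

    curl -ks 'https://192.168.72.22:8444/healthz?verbose'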
	I1206 19:55:51.310784  115497 cni.go:84] Creating CNI manager for ""
	I1206 19:55:51.310793  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:51.312719  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:55:51.314431  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:55:51.335045  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
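The 457 bytes written to /etc/cni/net.d/1-k8s.conflist are not shown in the log; minikube's bridge CNI config is roughly of this shape (illustrative values only, the real file is rendered from the cluster's pod CIDR):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }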
	I1206 19:55:51.365598  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:55:51.381865  115497 system_pods.go:59] 8 kube-system pods found
	I1206 19:55:51.381914  115497 system_pods.go:61] "coredns-5dd5756b68-4rgxf" [2ae6daa5-430f-4f14-a40c-c29f4757fb06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:55:51.381936  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [895b0cdf-86c9-4b14-a633-4b172471cd2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:55:51.381947  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [ccc042d4-cd4c-4769-adc6-99d792146d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:55:51.381963  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [b3fbba6f-fa71-489e-81b0-0196bb019273] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:55:51.381972  115497 system_pods.go:61] "kube-proxy-9ftnp" [4389fff8-1b22-47a5-af97-35a4e5b6c2b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:55:51.381981  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [b53c666c-cc84-4dd3-b208-35d04113381c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:55:51.381997  115497 system_pods.go:61] "metrics-server-57f55c9bc5-7bblg" [3a6477d9-cb91-48cb-ba03-8b669db53841] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:55:51.382006  115497 system_pods.go:61] "storage-provisioner" [b8f06027-e37c-4c09-b361-4d70af65c991] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:55:51.382020  115497 system_pods.go:74] duration metric: took 16.393796ms to wait for pod list to return data ...
	I1206 19:55:51.382041  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:55:51.389181  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:55:51.389242  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 19:55:51.389256  115497 node_conditions.go:105] duration metric: took 7.208817ms to run NodePressure ...
	I1206 19:55:51.389285  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:55:50.466490  115217 retry.go:31] will retry after 11.434043258s: kubelet not initialised
	I1206 19:55:49.900059  115591 crio.go:444] Took 1.838540 seconds to copy over tarball
	I1206 19:55:49.900171  115591 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:55:53.471724  115591 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.571512743s)
	I1206 19:55:53.471757  115591 crio.go:451] Took 3.571659 seconds to extract the tarball
	I1206 19:55:53.471770  115591 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:55:53.522151  115591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:55:53.578068  115591 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 19:55:53.578167  115591 cache_images.go:84] Images are preloaded, skipping loading
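minikube decides the preload is complete by asking CRI-O for its image list; the same check can be run by hand, for example (jq is an assumption here, it is not necessarily installed on the guest):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # should now list registry.k8s.io/kube-apiserver:v1.28.4 and the rest of the control-plane images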
	I1206 19:55:53.578285  115591 ssh_runner.go:195] Run: crio config
	I1206 19:55:53.650688  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:55:53.650715  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:55:53.650736  115591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:55:53.650762  115591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-209025 NodeName:embed-certs-209025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:55:53.650938  115591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-209025"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
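Before this generated config is handed to kubeadm (it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below), it can be sanity-checked with kubeadm itself; recent kubeadm releases, including the v1.28 line used here, ship a validate subcommand:

    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new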
	
	I1206 19:55:53.651025  115591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-209025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:55:53.651093  115591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 19:55:53.663792  115591 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:55:53.663869  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:55:53.674126  115591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1206 19:55:53.692175  115591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 19:55:53.708842  115591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
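The empty `ExecStart=` in the drop-in above is deliberate systemd syntax: it clears the ExecStart inherited from /lib/systemd/system/kubelet.service so the drop-in's full command line replaces it instead of being appended. Once the two files have been copied, the merged view can be inspected with:

    sudo systemctl cat kubelet      # base unit plus the 10-kubeadm.conf drop-in, as systemd will read them
    sudo systemctl daemon-reload    # pick up the freshly written files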
	I1206 19:55:53.726141  115591 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I1206 19:55:53.730310  115591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:55:53.742456  115591 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025 for IP: 192.168.50.164
	I1206 19:55:53.742489  115591 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:55:53.742712  115591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:55:53.742765  115591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:55:53.742841  115591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/client.key
	I1206 19:55:53.742898  115591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key.d84b90a2
	I1206 19:55:53.742941  115591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key
	I1206 19:55:53.743053  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:55:53.743081  115591 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:55:53.743096  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:55:53.743135  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:55:53.743172  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:55:53.743205  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:55:53.743265  115591 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:55:53.743932  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:55:53.770792  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 19:55:53.795080  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:55:53.820920  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/embed-certs-209025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 19:55:53.849068  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:55:53.875210  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:55:53.900201  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:55:53.927067  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:55:53.952810  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:55:53.979374  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:55:54.005013  115591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:55:54.028072  115591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:55:54.047087  115591 ssh_runner.go:195] Run: openssl version
	I1206 19:55:54.052949  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:55:54.064662  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069695  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.069767  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:55:54.076520  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:55:54.088312  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:55:54.100303  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105718  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.105787  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:55:54.111543  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:55:54.124094  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:55:54.137418  115591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142536  115591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.142611  115591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:55:54.148497  115591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
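The odd-looking link names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: TLS libraries locate a CA in /etc/ssl/certs by hashing the certificate subject and looking up <hash>.0. Each hash comes straight from the `openssl x509 -hash` call that precedes the corresponding `ln -fs`, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, hence the b5213941.0 link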
	I1206 19:55:54.160909  115591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:55:54.165739  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:55:54.171884  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:55:54.179765  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:55:54.187615  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:55:54.195156  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:55:54.203228  115591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
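The six `-checkend 86400` calls are the cert-expiry guard: openssl exits 0 if the certificate will still be valid 86400 seconds (24 h) from now and 1 if it will have expired by then, which is how minikube decides whether the control-plane certs need regenerating. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "still valid for at least 24h" || echo "expires within 24h"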
	I1206 19:55:54.210119  115591 kubeadm.go:404] StartCluster: {Name:embed-certs-209025 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-209025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:55:54.210251  115591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:55:54.210308  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:55:54.258252  115591 cri.go:89] found id: ""
	I1206 19:55:54.258347  115591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:55:54.270699  115591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:55:54.270724  115591 kubeadm.go:636] restartCluster start
	I1206 19:55:54.270785  115591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:55:54.281833  115591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.282964  115591 kubeconfig.go:92] found "embed-certs-209025" server: "https://192.168.50.164:8443"
	I1206 19:55:54.285394  115591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:55:54.296437  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.296545  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.309685  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:54.309707  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.309774  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.322265  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:51.655238  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:51.655732  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:51.655776  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:51.655642  117094 retry.go:31] will retry after 958.384892ms: waiting for machine to come up
	I1206 19:55:52.616005  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:52.616540  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:52.616583  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:52.616471  117094 retry.go:31] will retry after 1.537571193s: waiting for machine to come up
	I1206 19:55:54.155949  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:54.156397  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:54.156429  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:54.156344  117094 retry.go:31] will retry after 2.030397746s: waiting for machine to come up
	I1206 19:55:51.771991  115497 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:55:51.786960  115497 kubeadm.go:787] kubelet initialised
	I1206 19:55:51.787056  115497 kubeadm.go:788] duration metric: took 14.962005ms waiting for restarted kubelet to initialise ...
	I1206 19:55:51.787080  115497 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:55:51.799090  115497 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:53.845695  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:55.850483  115497 pod_ready.go:102] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"False"
	I1206 19:55:54.823014  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:54.823105  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:54.835793  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.323393  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.323491  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.337041  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:55.823330  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:55.823437  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:55.839489  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.323250  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.323356  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.340029  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:56.822585  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:56.822700  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:56.835752  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.323326  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.323413  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.339916  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:57.823386  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:57.823478  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:57.840112  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.322441  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.322557  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.335485  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:58.822575  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:58.822695  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:58.839495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:55:59.323053  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.323129  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.336117  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
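The warnings repeated above come from minikube's apiserver liveness probe, which simply looks for a running kube-apiserver process over SSH and treats a non-zero pgrep exit status as "not up yet". The probe can be reproduced on the node with the command from the log (quoted here so the shell does not expand the pattern):

    # -f matches against the full command line, -x requires the whole line to
    # match the pattern, -n picks the newest match; exit status 1 (as above)
    # means no kube-apiserver process exists yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'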
	I1206 19:55:56.188549  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:56.189073  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:56.189105  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:56.189026  117094 retry.go:31] will retry after 2.455387871s: waiting for machine to come up
	I1206 19:55:58.646361  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:55:58.646772  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:55:58.646804  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:55:58.646710  117094 retry.go:31] will retry after 3.286246406s: waiting for machine to come up
	I1206 19:55:57.344443  115497 pod_ready.go:92] pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace has status "Ready":"True"
	I1206 19:55:57.344478  115497 pod_ready.go:81] duration metric: took 5.545343389s waiting for pod "coredns-5dd5756b68-4rgxf" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:57.344492  115497 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:55:59.363596  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.363703  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:01.907869  115217 retry.go:31] will retry after 21.572905296s: kubelet not initialised
	I1206 19:55:59.823000  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:55:59.823148  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:55:59.836153  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.322534  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.322617  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.340369  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:00.822851  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:00.822947  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:00.836512  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.323083  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.323161  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.337092  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:01.822623  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:01.822761  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:01.836428  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.323125  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.323213  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.336617  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:02.823198  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:02.823287  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:02.835923  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.322426  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.322527  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.336495  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:03.822711  115591 api_server.go:166] Checking apiserver status ...
	I1206 19:56:03.822803  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:03.836624  115591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:04.297216  115591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:04.297278  115591 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:04.297295  115591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:04.297393  115591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:04.343930  115591 cri.go:89] found id: ""
	I1206 19:56:04.344015  115591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:04.364785  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:04.376251  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:04.376320  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387749  115591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:04.387779  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:04.511034  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:01.934204  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:01.934775  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:01.934798  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:01.934724  117094 retry.go:31] will retry after 2.967009815s: waiting for machine to come up
	I1206 19:56:04.903296  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:04.903725  115078 main.go:141] libmachine: (no-preload-989559) DBG | unable to find current IP address of domain no-preload-989559 in network mk-no-preload-989559
	I1206 19:56:04.903747  115078 main.go:141] libmachine: (no-preload-989559) DBG | I1206 19:56:04.903692  117094 retry.go:31] will retry after 4.817836653s: waiting for machine to come up
	I1206 19:56:03.862804  115497 pod_ready.go:102] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:04.373174  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.373209  115497 pod_ready.go:81] duration metric: took 7.028708302s waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.373222  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383300  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.383324  115497 pod_ready.go:81] duration metric: took 10.094356ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.383333  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390225  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.390254  115497 pod_ready.go:81] duration metric: took 6.909695ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.390267  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396713  115497 pod_ready.go:92] pod "kube-proxy-9ftnp" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.396753  115497 pod_ready.go:81] duration metric: took 6.477432ms waiting for pod "kube-proxy-9ftnp" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.396766  115497 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407015  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:04.407042  115497 pod_ready.go:81] duration metric: took 10.266604ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:04.407056  115497 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
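The pod_ready waits above are the in-process equivalent of polling the system-critical pods with kubectl; a rough manual equivalent (the kubeconfig context name is assumed here to follow minikube's profile naming) would be:

    # Wait for the etcd pod of the default-k8s-diff-port-380424 profile to
    # report Ready, mirroring the per-pod waits in the log.
    kubectl --context default-k8s-diff-port-380424 -n kube-system \
      wait --for=condition=Ready pod -l component=etcd --timeout=4m0s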
	I1206 19:56:05.819075  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.307992443s)
	I1206 19:56:05.819111  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.024824  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:06.120865  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
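Rather than running a full kubeadm init, the reconfigure path above replays the individual init phases against the existing cluster data: certs all, kubeconfig all, kubelet-start, control-plane all, and etcd local, with the addon phase following later once the apiserver answers its health checks. Condensed from the log, the sequence is roughly:

    BIN=/var/lib/minikube/binaries/v1.28.4
    CFG=/var/tmp/minikube/kubeadm.yaml
    # $phase is intentionally unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done
    # later, once /healthz returns 200:
    sudo env PATH="$BIN:$PATH" kubeadm init phase addon all --config "$CFG"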
	I1206 19:56:06.207869  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:06.207959  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.221306  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:06.734164  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.234302  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:07.734130  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.233726  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.734073  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:08.762848  115591 api_server.go:72] duration metric: took 2.554978073s to wait for apiserver process to appear ...
	I1206 19:56:08.762881  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:08.762903  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:09.723600  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724078  115078 main.go:141] libmachine: (no-preload-989559) Found IP for machine: 192.168.39.5
	I1206 19:56:09.724107  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has current primary IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.724114  115078 main.go:141] libmachine: (no-preload-989559) Reserving static IP address...
	I1206 19:56:09.724466  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.724509  115078 main.go:141] libmachine: (no-preload-989559) DBG | skip adding static IP to network mk-no-preload-989559 - found existing host DHCP lease matching {name: "no-preload-989559", mac: "52:54:00:1c:4b:ce", ip: "192.168.39.5"}
	I1206 19:56:09.724526  115078 main.go:141] libmachine: (no-preload-989559) Reserved static IP address: 192.168.39.5
	I1206 19:56:09.724536  115078 main.go:141] libmachine: (no-preload-989559) Waiting for SSH to be available...
	I1206 19:56:09.724546  115078 main.go:141] libmachine: (no-preload-989559) DBG | Getting to WaitForSSH function...
	I1206 19:56:09.726831  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727117  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.727149  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.727248  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH client type: external
	I1206 19:56:09.727277  115078 main.go:141] libmachine: (no-preload-989559) DBG | Using SSH private key: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa (-rw-------)
	I1206 19:56:09.727306  115078 main.go:141] libmachine: (no-preload-989559) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1206 19:56:09.727317  115078 main.go:141] libmachine: (no-preload-989559) DBG | About to run SSH command:
	I1206 19:56:09.727361  115078 main.go:141] libmachine: (no-preload-989559) DBG | exit 0
	I1206 19:56:09.866040  115078 main.go:141] libmachine: (no-preload-989559) DBG | SSH cmd err, output: <nil>: 
	I1206 19:56:09.866443  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetConfigRaw
	I1206 19:56:09.867193  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:09.869892  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870335  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.870374  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.870612  115078 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/config.json ...
	I1206 19:56:09.870870  115078 machine.go:88] provisioning docker machine ...
	I1206 19:56:09.870895  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:09.871120  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871299  115078 buildroot.go:166] provisioning hostname "no-preload-989559"
	I1206 19:56:09.871320  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:09.871471  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:09.874146  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874514  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:09.874554  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:09.874741  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:09.874943  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875114  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:09.875258  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:09.875412  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:09.875921  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:09.875942  115078 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-989559 && echo "no-preload-989559" | sudo tee /etc/hostname
	I1206 19:56:10.017205  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-989559
	
	I1206 19:56:10.017259  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.020397  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.020843  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.020873  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.021040  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.021287  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021450  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.021578  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.021773  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.022227  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.022255  115078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-989559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-989559/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-989559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:56:10.160934  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:56:10.161020  115078 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17740-63652/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-63652/.minikube}
	I1206 19:56:10.161056  115078 buildroot.go:174] setting up certificates
	I1206 19:56:10.161072  115078 provision.go:83] configureAuth start
	I1206 19:56:10.161086  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetMachineName
	I1206 19:56:10.161464  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:10.164558  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.164956  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.165007  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.165246  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.167911  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168352  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.168412  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.168529  115078 provision.go:138] copyHostCerts
	I1206 19:56:10.168589  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem, removing ...
	I1206 19:56:10.168612  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem
	I1206 19:56:10.168675  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/ca.pem (1082 bytes)
	I1206 19:56:10.168796  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem, removing ...
	I1206 19:56:10.168811  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem
	I1206 19:56:10.168844  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/cert.pem (1123 bytes)
	I1206 19:56:10.168923  115078 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem, removing ...
	I1206 19:56:10.168962  115078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem
	I1206 19:56:10.168990  115078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-63652/.minikube/key.pem (1679 bytes)
	I1206 19:56:10.169062  115078 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem org=jenkins.no-preload-989559 san=[192.168.39.5 192.168.39.5 localhost 127.0.0.1 minikube no-preload-989559]
	I1206 19:56:10.266595  115078 provision.go:172] copyRemoteCerts
	I1206 19:56:10.266665  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:56:10.266693  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.269388  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269786  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.269813  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.269987  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.270226  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.270390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.270536  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.362922  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 19:56:10.388165  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1206 19:56:10.412473  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 19:56:10.436804  115078 provision.go:86] duration metric: configureAuth took 275.714086ms
	I1206 19:56:10.436840  115078 buildroot.go:189] setting minikube options for container-runtime
	I1206 19:56:10.437076  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 19:56:10.437156  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.439999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440419  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.440461  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.440567  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.440813  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441006  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.441213  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.441393  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.441827  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.441844  115078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 19:56:10.766695  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 19:56:10.766726  115078 machine.go:91] provisioned docker machine in 895.840237ms
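The %!s(MISSING) in the provisioning command above is Go's fmt notation for a format verb whose argument was consumed before the command was echoed into the log; judging by the file contents echoed back at 19:56:10.766695, what actually ran was roughly:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio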
	I1206 19:56:10.766739  115078 start.go:300] post-start starting for "no-preload-989559" (driver="kvm2")
	I1206 19:56:10.766759  115078 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:56:10.766780  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:10.767134  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:56:10.767175  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.770309  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770704  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.770733  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.770881  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.771110  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.771247  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.771414  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:10.869486  115078 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:56:10.874406  115078 info.go:137] Remote host: Buildroot 2021.02.12
	I1206 19:56:10.874433  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/addons for local assets ...
	I1206 19:56:10.874502  115078 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-63652/.minikube/files for local assets ...
	I1206 19:56:10.874584  115078 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem -> 708342.pem in /etc/ssl/certs
	I1206 19:56:10.874684  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:56:10.885837  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:10.910379  115078 start.go:303] post-start completed in 143.622076ms
	I1206 19:56:10.910406  115078 fix.go:56] fixHost completed within 24.423837205s
	I1206 19:56:10.910430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:10.913414  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.913887  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:10.913924  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:10.914062  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:10.914276  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914430  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:10.914575  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:10.914741  115078 main.go:141] libmachine: Using SSH client type: native
	I1206 19:56:10.915078  115078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I1206 19:56:10.915096  115078 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1206 19:56:06.672320  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:09.170277  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.173418  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:11.046393  115078 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701892571.030057611
	
	I1206 19:56:11.046418  115078 fix.go:206] guest clock: 1701892571.030057611
	I1206 19:56:11.046427  115078 fix.go:219] Guest: 2023-12-06 19:56:11.030057611 +0000 UTC Remote: 2023-12-06 19:56:10.910410702 +0000 UTC m=+364.968252500 (delta=119.646909ms)
	I1206 19:56:11.046452  115078 fix.go:190] guest clock delta is within tolerance: 119.646909ms
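The same missing-argument notation shows up in the clock check: "date +%!s(MISSING).%!N(MISSING)" is presumably date +%s.%N, which prints epoch seconds with nanosecond precision (the 1701892571.030057611 echoed back above). minikube compares that guest timestamp against the host clock to arrive at the ~120ms delta it accepts as within tolerance.

    # On the guest: seconds since the epoch with nanoseconds appended,
    # e.g. 1701892571.030057611; compare against the host's own clock.
    date +%s.%N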
	I1206 19:56:11.046460  115078 start.go:83] releasing machines lock for "no-preload-989559", held for 24.559924375s
	I1206 19:56:11.046485  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.046751  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:11.049522  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.049918  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.049958  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.050160  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050715  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.050932  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:11.051010  115078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:56:11.051063  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.051201  115078 ssh_runner.go:195] Run: cat /version.json
	I1206 19:56:11.051234  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:11.054142  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054342  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054556  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054587  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.054711  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.054925  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:11.054930  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.054950  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:11.055054  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:11.055147  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055316  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.055338  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:11.055483  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:11.055605  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:11.180256  115078 ssh_runner.go:195] Run: systemctl --version
	I1206 19:56:11.186702  115078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 19:56:11.339386  115078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 19:56:11.346262  115078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 19:56:11.346364  115078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 19:56:11.362865  115078 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 19:56:11.362902  115078 start.go:475] detecting cgroup driver to use...
	I1206 19:56:11.362988  115078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 19:56:11.383636  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 19:56:11.397157  115078 docker.go:203] disabling cri-docker service (if available) ...
	I1206 19:56:11.397264  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 19:56:11.411338  115078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 19:56:11.425762  115078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 19:56:11.560730  115078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 19:56:11.708633  115078 docker.go:219] disabling docker service ...
	I1206 19:56:11.708713  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 19:56:11.723172  115078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 19:56:11.737032  115078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 19:56:11.851037  115078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 19:56:11.969321  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 19:56:11.982745  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:56:12.003130  115078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 19:56:12.003215  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.013345  115078 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 19:56:12.013428  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.023765  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.034114  115078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 19:56:12.044159  115078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:56:12.054135  115078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:56:12.062781  115078 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 19:56:12.062861  115078 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 19:56:12.076322  115078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:56:12.085924  115078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:56:12.216360  115078 ssh_runner.go:195] Run: sudo systemctl restart crio
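The sed commands above edit CRI-O's drop-in configuration in place before the restart; their net effect is the following three settings in the drop-in, which can be verified on the node (exact file layout aside):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"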
	I1206 19:56:12.409482  115078 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 19:56:12.409550  115078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 19:56:12.417063  115078 start.go:543] Will wait 60s for crictl version
	I1206 19:56:12.417135  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:12.422177  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 19:56:12.474340  115078 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1206 19:56:12.474449  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.538091  115078 ssh_runner.go:195] Run: crio --version
	I1206 19:56:12.604444  115078 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1206 19:56:12.144887  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.144921  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.144936  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.179318  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:12.179366  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:12.679803  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:12.694412  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:12.694449  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.179503  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.193118  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:13.193161  115591 api_server.go:103] status: https://192.168.50.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:13.679759  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 19:56:13.685603  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 19:56:13.694792  115591 api_server.go:141] control plane version: v1.28.4
	I1206 19:56:13.694831  115591 api_server.go:131] duration metric: took 4.931941572s to wait for apiserver health ...
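The 403 -> 500 -> 200 progression above is the usual restart sequence: the first probes are still rejected for the anonymous user, the verbose 500 bodies then list exactly which post-start hooks remain pending (rbac/bootstrap-roles and the priority-class bootstrap being the last to clear), and the wait ends as soon as the endpoint returns a plain 200 "ok". The same probe can be run by hand against the endpoint from the log:

    # -k skips TLS verification for a quick manual check; pass client
    # credentials from the admin kubeconfig instead if anonymous access
    # is still being refused (as in the 403 responses above).
    curl -k 'https://192.168.50.164:8443/healthz?verbose'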
	I1206 19:56:13.694843  115591 cni.go:84] Creating CNI manager for ""
	I1206 19:56:13.694852  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:13.697042  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:13.698653  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:13.712991  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 19:56:13.734001  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:13.761962  115591 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:13.762001  115591 system_pods.go:61] "coredns-5dd5756b68-cpst4" [e7d8324e-8468-470c-b532-1f09ee805bab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:13.762022  115591 system_pods.go:61] "etcd-embed-certs-209025" [eeb81149-8e43-4efe-b977-e8f84c7a7c57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:13.762032  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b64e228d-4921-4e35-b80c-343f8519076e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:13.762041  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [2206d849-0724-42c9-b5c4-4d84c3cafce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:13.762053  115591 system_pods.go:61] "kube-proxy-pt8nj" [b7cffe6a-4401-40e0-8056-68452e15b57c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:13.762068  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [88ae7a94-a1bc-463a-9253-5f308ec1755e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:13.762077  115591 system_pods.go:61] "metrics-server-57f55c9bc5-dr9k8" [0dbe18a4-d30d-4882-b188-b0d1f1b1d711] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:13.762092  115591 system_pods.go:61] "storage-provisioner" [afebf144-9062-4b43-a491-9eecd5ab6c10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:13.762109  115591 system_pods.go:74] duration metric: took 28.078588ms to wait for pod list to return data ...
	I1206 19:56:13.762120  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:13.773614  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:13.773646  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:13.773657  115591 node_conditions.go:105] duration metric: took 11.528993ms to run NodePressure ...
	I1206 19:56:13.773678  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:14.157761  115591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169588  115591 kubeadm.go:787] kubelet initialised
	I1206 19:56:14.169632  115591 kubeadm.go:788] duration metric: took 11.756226ms waiting for restarted kubelet to initialise ...
	I1206 19:56:14.169644  115591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:14.186031  115591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.211717  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211747  115591 pod_ready.go:81] duration metric: took 25.681607ms waiting for pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.211759  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "coredns-5dd5756b68-cpst4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.211769  115591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.219369  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219396  115591 pod_ready.go:81] duration metric: took 7.594898ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.219408  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "etcd-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.219425  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.233417  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233513  115591 pod_ready.go:81] duration metric: took 14.073312ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.233535  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.233546  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.244480  115591 pod_ready.go:97] node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244516  115591 pod_ready.go:81] duration metric: took 10.958431ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:14.244530  115591 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-209025" hosting pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-209025" has status "Ready":"False"
	I1206 19:56:14.244537  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:12.606102  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetIP
	I1206 19:56:12.609040  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609395  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:12.609436  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:12.609665  115078 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 19:56:12.615279  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:12.629571  115078 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 19:56:12.629641  115078 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 19:56:12.674728  115078 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1206 19:56:12.674763  115078 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:56:12.674870  115078 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.674886  115078 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.674910  115078 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.674923  115078 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.674965  115078 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1206 19:56:12.674885  115078 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.674998  115078 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.674889  115078 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676510  115078 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.676539  115078 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:12.676563  115078 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.676576  115078 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1206 19:56:12.676511  115078 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.676599  115078 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.676624  115078 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.676642  115078 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.862606  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1206 19:56:12.882993  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:12.884387  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:12.900149  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:12.909389  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:12.916391  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:12.924669  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:12.946885  115078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.028628  115078 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1206 19:56:13.028685  115078 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.028741  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.095076  115078 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1206 19:56:13.095139  115078 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.095289  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.136956  115078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1206 19:56:13.137003  115078 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 19:56:13.137074  115078 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.137130  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.137005  115078 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.137268  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.146913  115078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1206 19:56:13.146970  115078 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.147024  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.159866  115078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1206 19:56:13.159913  115078 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.159963  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162288  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1206 19:56:13.162330  115078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1206 19:56:13.162375  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1206 19:56:13.162378  115078 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.162399  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:13.162407  115078 ssh_runner.go:195] Run: which crictl
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1206 19:56:13.162523  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1206 19:56:13.165637  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1206 19:56:13.319155  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1206 19:56:13.319253  115078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1206 19:56:13.319274  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.319300  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 19:56:13.319371  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319394  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:13.319405  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1206 19:56:13.319423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.319472  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:13.319495  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.319545  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319621  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:13.319546  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:13.376009  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376036  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376100  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1206 19:56:13.376145  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1206 19:56:13.376179  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1206 19:56:13.376217  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1206 19:56:13.376273  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376302  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1206 19:56:13.376329  115078 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:13.376423  115078 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:15.530421  115078 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.153965348s)
	I1206 19:56:15.530466  115078 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1206 19:56:15.530502  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.154372843s)
	I1206 19:56:15.530536  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1206 19:56:15.530571  115078 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:15.530630  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1206 19:56:13.177508  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:15.671903  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:14.963353  115591 pod_ready.go:92] pod "kube-proxy-pt8nj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:14.963382  115591 pod_ready.go:81] duration metric: took 718.835702ms waiting for pod "kube-proxy-pt8nj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:14.963391  115591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:17.284373  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.354814  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.824152707s)
	I1206 19:56:19.354846  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1206 19:56:19.354874  115078 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:19.354924  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 19:56:20.402300  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.047341059s)
	I1206 19:56:20.402334  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 19:56:20.402378  115078 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:20.402442  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1206 19:56:17.672489  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:20.171526  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:19.771500  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:22.273627  115591 pod_ready.go:102] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.269993  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.270019  115591 pod_ready.go:81] duration metric: took 8.306621129s waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.270029  115591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:22.575204  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.17273177s)
	I1206 19:56:22.575240  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1206 19:56:22.575270  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:22.575318  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1206 19:56:25.335616  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.760267154s)
	I1206 19:56:25.335646  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1206 19:56:25.335680  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:25.335760  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1206 19:56:22.175410  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:24.677136  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:23.486162  115217 kubeadm.go:787] kubelet initialised
	I1206 19:56:23.486192  115217 kubeadm.go:788] duration metric: took 47.560169603s waiting for restarted kubelet to initialise ...
	I1206 19:56:23.486203  115217 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:23.491797  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499126  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.499149  115217 pod_ready.go:81] duration metric: took 7.327003ms waiting for pod "coredns-5644d7b6d9-85xcj" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.499160  115217 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.503979  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.504002  115217 pod_ready.go:81] duration metric: took 4.834039ms waiting for pod "coredns-5644d7b6d9-nrtk9" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.504014  115217 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509110  115217 pod_ready.go:92] pod "etcd-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.509132  115217 pod_ready.go:81] duration metric: took 5.109845ms waiting for pod "etcd-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.509153  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514641  115217 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.514665  115217 pod_ready.go:81] duration metric: took 5.502762ms waiting for pod "kube-apiserver-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.514677  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886694  115217 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:23.886726  115217 pod_ready.go:81] duration metric: took 372.040617ms waiting for pod "kube-controller-manager-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:23.886741  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287638  115217 pod_ready.go:92] pod "kube-proxy-sw4qv" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.287662  115217 pod_ready.go:81] duration metric: took 400.914693ms waiting for pod "kube-proxy-sw4qv" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.287673  115217 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688298  115217 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace has status "Ready":"True"
	I1206 19:56:24.688328  115217 pod_ready.go:81] duration metric: took 400.645544ms waiting for pod "kube-scheduler-old-k8s-version-448851" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:24.688343  115217 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:26.991669  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:25.288536  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.290135  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:27.610095  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.274298339s)
	I1206 19:56:27.610132  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1206 19:56:27.610163  115078 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:27.610219  115078 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1206 19:56:30.272712  115078 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (2.662458967s)
	I1206 19:56:30.272746  115078 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17740-63652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1206 19:56:30.272782  115078 cache_images.go:123] Successfully loaded all cached images
	I1206 19:56:30.272789  115078 cache_images.go:92] LoadImages completed in 17.598011028s
	I1206 19:56:30.272883  115078 ssh_runner.go:195] Run: crio config
	I1206 19:56:30.341321  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:30.341346  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:30.341368  115078 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:56:30.341392  115078 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-989559 NodeName:no-preload-989559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 19:56:30.341597  115078 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-989559"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 19:56:30.341693  115078 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-989559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:56:30.341758  115078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1206 19:56:30.351650  115078 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:56:30.351729  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:56:30.360413  115078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1206 19:56:30.376399  115078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1206 19:56:30.392522  115078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1206 19:56:30.409313  115078 ssh_runner.go:195] Run: grep 192.168.39.5	control-plane.minikube.internal$ /etc/hosts
	I1206 19:56:30.413355  115078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:56:30.426797  115078 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559 for IP: 192.168.39.5
	I1206 19:56:30.426854  115078 certs.go:190] acquiring lock for shared ca certs: {Name:mkf8fbf7b590617ef4dc6c3a4acb742ae26f89ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:30.427070  115078 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key
	I1206 19:56:30.427134  115078 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key
	I1206 19:56:30.427240  115078 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.key
	I1206 19:56:30.427311  115078 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key.c9b343a5
	I1206 19:56:30.427350  115078 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key
	I1206 19:56:30.427454  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem (1338 bytes)
	W1206 19:56:30.427508  115078 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834_empty.pem, impossibly tiny 0 bytes
	I1206 19:56:30.427521  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:56:30.427550  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/ca.pem (1082 bytes)
	I1206 19:56:30.427571  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:56:30.427593  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/certs/home/jenkins/minikube-integration/17740-63652/.minikube/certs/key.pem (1679 bytes)
	I1206 19:56:30.427634  115078 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem (1708 bytes)
	I1206 19:56:30.428313  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:56:30.452268  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:56:30.476793  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:56:30.503751  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 19:56:30.530680  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:56:30.557770  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 19:56:30.582244  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:56:30.608096  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 19:56:30.634585  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/certs/70834.pem --> /usr/share/ca-certificates/70834.pem (1338 bytes)
	I1206 19:56:30.660669  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/ssl/certs/708342.pem --> /usr/share/ca-certificates/708342.pem (1708 bytes)
	I1206 19:56:30.686987  115078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-63652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:56:30.711098  115078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:56:30.727576  115078 ssh_runner.go:195] Run: openssl version
	I1206 19:56:30.733568  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/708342.pem && ln -fs /usr/share/ca-certificates/708342.pem /etc/ssl/certs/708342.pem"
	I1206 19:56:30.743777  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.748976  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:50 /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.749033  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/708342.pem
	I1206 19:56:30.755465  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/708342.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 19:56:30.766285  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:56:30.777164  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782160  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.782228  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:56:30.789394  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:56:30.801293  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70834.pem && ln -fs /usr/share/ca-certificates/70834.pem /etc/ssl/certs/70834.pem"
	I1206 19:56:30.812646  115078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818147  115078 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:50 /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.818209  115078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70834.pem
	I1206 19:56:30.824161  115078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/70834.pem /etc/ssl/certs/51391683.0"
	I1206 19:56:30.834389  115078 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:56:30.839518  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 19:56:30.845997  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 19:56:30.852229  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 19:56:30.858622  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 19:56:30.864675  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 19:56:30.870945  115078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 19:56:30.878301  115078 kubeadm.go:404] StartCluster: {Name:no-preload-989559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.1 ClusterName:no-preload-989559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:56:30.878438  115078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 19:56:30.878513  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:30.921588  115078 cri.go:89] found id: ""
	I1206 19:56:30.921692  115078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:56:30.932160  115078 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1206 19:56:30.932190  115078 kubeadm.go:636] restartCluster start
	I1206 19:56:30.932264  115078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 19:56:30.942019  115078 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.943237  115078 kubeconfig.go:92] found "no-preload-989559" server: "https://192.168.39.5:8443"
	I1206 19:56:30.945618  115078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 19:56:30.954582  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.954655  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.966532  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:30.966555  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:30.966602  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:30.979930  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:27.172625  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:29.671318  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:28.992218  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:30.994420  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.786922  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.787251  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:31.480021  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.480135  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.493287  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:31.980317  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:31.980409  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:31.994348  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.480929  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.481020  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.494940  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:32.980449  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:32.980559  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:32.993316  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.481040  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.481150  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.494210  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:33.980837  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:33.980936  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:33.994280  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.480389  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.480492  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.493915  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:34.980458  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:34.980569  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:34.994306  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.480788  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.480897  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:35.495397  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:35.980815  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:35.980919  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:32.171889  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:34.669989  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:33.491932  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.492626  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:37.991389  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:35.787950  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:38.288581  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	W1206 19:56:35.994848  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.480833  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.480959  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.496053  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:36.980074  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:36.980197  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:36.994615  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.480110  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.480203  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.494380  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:37.980922  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:37.981009  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:37.994865  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.480432  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.480536  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.494938  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:38.980148  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:38.980250  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:38.995427  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.481067  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.481153  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.494631  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:39.980142  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:39.980255  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:39.991638  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.480132  115078 api_server.go:166] Checking apiserver status ...
	I1206 19:56:40.480205  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1206 19:56:40.492507  115078 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1206 19:56:40.955413  115078 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1206 19:56:40.955478  115078 kubeadm.go:1135] stopping kube-system containers ...
	I1206 19:56:40.955492  115078 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 19:56:40.955574  115078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 19:56:36.673986  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:39.172561  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:41.177049  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.490976  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.492210  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.293997  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:42.789693  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:40.997724  115078 cri.go:89] found id: ""
	I1206 19:56:40.997783  115078 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 19:56:41.013137  115078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:56:41.021612  115078 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:56:41.021667  115078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030846  115078 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1206 19:56:41.030878  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:41.160850  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.395616  115078 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234715721s)
	I1206 19:56:42.395651  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.595187  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:42.688245  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
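
The five kubeadm phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) are how the control plane is rebuilt in place after the stale-config check failed. A sketch of that sequence, with the binary path, PATH prefix, and config path copied from the log and the ordering and error handling purely illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase,
    		)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
    			os.Exit(1)
    		}
    	}
    }
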
	I1206 19:56:42.769464  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:56:42.769566  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:42.783010  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.303551  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:43.803070  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.303922  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:44.803326  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.302954  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:56:45.323804  115078 api_server.go:72] duration metric: took 2.55435107s to wait for apiserver process to appear ...
	I1206 19:56:45.323839  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:56:45.323865  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.324588  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.324632  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:45.325115  115078 api_server.go:269] stopped: https://192.168.39.5:8443/healthz: Get "https://192.168.39.5:8443/healthz": dial tcp 192.168.39.5:8443: connect: connection refused
	I1206 19:56:45.825883  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:43.670089  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.670833  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:44.994670  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.492548  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:45.288109  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:47.788636  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.759033  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.759089  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.759117  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.778467  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 19:56:49.778502  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 19:56:49.825793  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:49.888751  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:49.888801  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.325211  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.330395  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.330438  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:50.826038  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:50.830801  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1206 19:56:50.830836  115078 api_server.go:103] status: https://192.168.39.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1206 19:56:51.325298  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 19:56:51.331295  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 19:56:51.340412  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 19:56:51.340445  115078 api_server.go:131] duration metric: took 6.016598018s to wait for apiserver health ...
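
The healthz wait above progresses from connection refused, to 403 while the anonymous probe is still forbidden, to 500 while poststarthooks such as rbac/bootstrap-roles are still failing, and finally to 200 "ok". A minimal sketch of that polling loop (certificate verification is skipped here purely to keep the sketch short; the endpoint and cadence are taken from the log):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.5:8443/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body) // "ok"
    				return
    			}
    			// 403/500 responses carry the per-check breakdown shown in the log.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }
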
	I1206 19:56:51.340457  115078 cni.go:84] Creating CNI manager for ""
	I1206 19:56:51.340465  115078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 19:56:51.383227  115078 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:56:47.671090  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:50.173835  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:49.494360  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.991886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:51.385027  115078 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:56:51.399942  115078 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
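
The 457-byte conflist copied into /etc/cni/net.d above is not reproduced in the log; the bridge configuration in the sketch below is an assumed, minimal stand-in that only illustrates the shape of such a file, not minikube's actual template:

    package main

    import "os"

    // Assumed example of a bridge CNI conflist; field values are illustrative.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
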
	I1206 19:56:51.422533  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:56:51.446615  115078 system_pods.go:59] 8 kube-system pods found
	I1206 19:56:51.446661  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 19:56:51.446671  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 19:56:51.446684  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 19:56:51.446698  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 19:56:51.446707  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 19:56:51.446716  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 19:56:51.446731  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:56:51.446739  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 19:56:51.446749  115078 system_pods.go:74] duration metric: took 24.188803ms to wait for pod list to return data ...
	I1206 19:56:51.446758  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:56:51.452770  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 19:56:51.452803  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 19:56:51.452817  115078 node_conditions.go:105] duration metric: took 6.05327ms to run NodePressure ...
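
The NodePressure verification above reads node capacity (2 CPUs and 17784752Ki of ephemeral storage here) and checks the pressure conditions. An illustrative client-go version of the same check; the kubeconfig path is a placeholder and this is not minikube's node_conditions.go implementation:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    		for _, c := range n.Status.Conditions {
    			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
    				c.Status == corev1.ConditionTrue {
    				fmt.Printf("%s is under pressure: %s\n", n.Name, c.Type)
    			}
    		}
    	}
    }
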
	I1206 19:56:51.452840  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 19:56:51.740786  115078 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746512  115078 kubeadm.go:787] kubelet initialised
	I1206 19:56:51.746541  115078 kubeadm.go:788] duration metric: took 5.720787ms waiting for restarted kubelet to initialise ...
	I1206 19:56:51.746550  115078 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:56:51.752751  115078 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.761003  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761032  115078 pod_ready.go:81] duration metric: took 8.254381ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.761043  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "coredns-76f75df574-h9pkz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.761052  115078 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.766223  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766248  115078 pod_ready.go:81] duration metric: took 5.184525ms waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.766259  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "etcd-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.766271  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.771516  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771541  115078 pod_ready.go:81] duration metric: took 5.262069ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.771552  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-apiserver-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.771561  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:51.827774  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827804  115078 pod_ready.go:81] duration metric: took 56.232455ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:51.827818  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:51.827826  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.231699  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231761  115078 pod_ready.go:81] duration metric: took 403.922333ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.231774  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-proxy-zgqvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.231790  115078 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:52.626827  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626863  115078 pod_ready.go:81] duration metric: took 395.06457ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:52.626877  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "kube-scheduler-no-preload-989559" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.626889  115078 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:53.028166  115078 pod_ready.go:97] node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028201  115078 pod_ready.go:81] duration metric: took 401.294916ms waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 19:56:53.028214  115078 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-989559" hosting pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:53.028226  115078 pod_ready.go:38] duration metric: took 1.281664253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
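
Each pod_ready.go wait above boils down to polling the pod's Ready condition, with the wait skipped (as the WaitExtra errors show) while the hosting node itself still reports Ready=False. A minimal client-go sketch of the per-pod check; the kubeconfig path and pod name are placeholders, not the exact minikube code path:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podReady(cs, "kube-system", "coredns-76f75df574-h9pkz")
    	fmt.Println(ready, err)
    }
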
	I1206 19:56:53.028249  115078 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:56:53.057673  115078 ops.go:34] apiserver oom_adj: -16
	I1206 19:56:53.057706  115078 kubeadm.go:640] restartCluster took 22.12550727s
	I1206 19:56:53.057718  115078 kubeadm.go:406] StartCluster complete in 22.179430573s
	I1206 19:56:53.057756  115078 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.057857  115078 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:56:53.059885  115078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:56:53.060125  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:56:53.060244  115078 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:56:53.060337  115078 addons.go:69] Setting storage-provisioner=true in profile "no-preload-989559"
	I1206 19:56:53.060364  115078 addons.go:231] Setting addon storage-provisioner=true in "no-preload-989559"
	I1206 19:56:53.060367  115078 config.go:182] Loaded profile config "no-preload-989559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	W1206 19:56:53.060375  115078 addons.go:240] addon storage-provisioner should already be in state true
	I1206 19:56:53.060404  115078 addons.go:69] Setting default-storageclass=true in profile "no-preload-989559"
	I1206 19:56:53.060415  115078 addons.go:69] Setting metrics-server=true in profile "no-preload-989559"
	I1206 19:56:53.060430  115078 addons.go:231] Setting addon metrics-server=true in "no-preload-989559"
	I1206 19:56:53.060433  115078 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-989559"
	W1206 19:56:53.060440  115078 addons.go:240] addon metrics-server should already be in state true
	I1206 19:56:53.060452  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060472  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.060856  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060889  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060917  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.060894  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.065950  115078 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-989559" context rescaled to 1 replicas
	I1206 19:56:53.065992  115078 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 19:56:53.068038  115078 out.go:177] * Verifying Kubernetes components...
	I1206 19:56:53.069775  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:56:53.077795  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I1206 19:56:53.078120  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
	I1206 19:56:53.078274  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078714  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.078902  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.078928  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079207  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.079226  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.079272  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079514  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.079727  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.079865  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.079899  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.083670  115078 addons.go:231] Setting addon default-storageclass=true in "no-preload-989559"
	W1206 19:56:53.083695  115078 addons.go:240] addon default-storageclass should already be in state true
	I1206 19:56:53.083724  115078 host.go:66] Checking if "no-preload-989559" exists ...
	I1206 19:56:53.084178  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.084230  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.097845  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1206 19:56:53.098357  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.099058  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.099080  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.099409  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.099633  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.101625  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.103641  115078 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 19:56:53.105081  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I1206 19:56:53.105105  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 19:56:53.105123  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 19:56:53.105150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.104327  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1206 19:56:53.105556  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105777  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.105983  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.105998  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106312  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.106328  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.106619  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.106910  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.107192  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107229  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.107338  115078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:56:53.107398  115078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:56:53.108297  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.108969  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.108999  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.109150  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.109436  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.109586  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.109725  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.123985  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I1206 19:56:53.124496  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125052  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.125078  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.125325  115078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1206 19:56:53.125570  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.125785  115078 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:56:53.125826  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.126385  115078 main.go:141] libmachine: Using API Version  1
	I1206 19:56:53.126413  115078 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:56:53.126875  115078 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:56:53.127050  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetState
	I1206 19:56:53.127923  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.128212  115078 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.128226  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:56:53.128242  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.128747  115078 main.go:141] libmachine: (no-preload-989559) Calling .DriverName
	I1206 19:56:53.131043  115078 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:56:53.131487  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132638  115078 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.132645  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.132651  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:56:53.132667  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHHostname
	I1206 19:56:53.132682  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.132132  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.133425  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.133636  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.133870  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.136039  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136583  115078 main.go:141] libmachine: (no-preload-989559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:ce", ip: ""} in network mk-no-preload-989559: {Iface:virbr2 ExpiryTime:2023-12-06 20:56:00 +0000 UTC Type:0 Mac:52:54:00:1c:4b:ce Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:no-preload-989559 Clientid:01:52:54:00:1c:4b:ce}
	I1206 19:56:53.136612  115078 main.go:141] libmachine: (no-preload-989559) DBG | domain no-preload-989559 has defined IP address 192.168.39.5 and MAC address 52:54:00:1c:4b:ce in network mk-no-preload-989559
	I1206 19:56:53.136850  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHPort
	I1206 19:56:53.137087  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHKeyPath
	I1206 19:56:53.137390  115078 main.go:141] libmachine: (no-preload-989559) Calling .GetSSHUsername
	I1206 19:56:53.137583  115078 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/no-preload-989559/id_rsa Username:docker}
	I1206 19:56:53.247726  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 19:56:53.247751  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 19:56:53.271421  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:56:53.296149  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 19:56:53.296181  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 19:56:53.303580  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:56:53.350607  115078 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1206 19:56:53.350607  115078 node_ready.go:35] waiting up to 6m0s for node "no-preload-989559" to be "Ready" ...
	I1206 19:56:53.355315  115078 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:56:53.355336  115078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 19:56:53.392730  115078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
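
The metrics-server addon above is applied with the bundled kubectl against the in-VM kubeconfig. A sketch reproducing that invocation shape, with the binary and manifest paths copied from the log and the error handling illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.29.0-rc.1/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "kubectl apply failed: %v\n%s", err, out)
    		os.Exit(1)
    	}
    	fmt.Printf("%s", out)
    }
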
	I1206 19:56:53.624768  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.624798  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625224  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625330  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.625353  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.625393  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.625227  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:53.625849  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.625874  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:53.632671  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:53.632691  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:53.632983  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:53.633005  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433395  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.12977215s)
	I1206 19:56:54.433462  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433491  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433360  115078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040565961s)
	I1206 19:56:54.433546  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433567  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433833  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433854  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433863  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433867  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433871  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.433842  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.433908  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.433926  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.433939  115078 main.go:141] libmachine: Making call to close driver server
	I1206 19:56:54.433951  115078 main.go:141] libmachine: (no-preload-989559) Calling .Close
	I1206 19:56:54.434124  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434148  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434153  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434199  115078 main.go:141] libmachine: (no-preload-989559) DBG | Closing plugin on server side
	I1206 19:56:54.434212  115078 main.go:141] libmachine: Successfully made call to close driver server
	I1206 19:56:54.434224  115078 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 19:56:54.434240  115078 addons.go:467] Verifying addon metrics-server=true in "no-preload-989559"
	I1206 19:56:54.437357  115078 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 19:56:50.289141  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:52.289568  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.438928  115078 addons.go:502] enable addons completed in 1.378684523s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 19:56:55.439812  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:56:52.174520  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.175288  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.492713  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:56.493106  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:54.789039  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.288485  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.289450  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:57.931320  115078 node_ready.go:58] node "no-preload-989559" has status "Ready":"False"
	I1206 19:57:00.430485  115078 node_ready.go:49] node "no-preload-989559" has status "Ready":"True"
	I1206 19:57:00.430517  115078 node_ready.go:38] duration metric: took 7.079875254s waiting for node "no-preload-989559" to be "Ready" ...
	I1206 19:57:00.430530  115078 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:57:00.436772  115078 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442667  115078 pod_ready.go:92] pod "coredns-76f75df574-h9pkz" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:00.442688  115078 pod_ready.go:81] duration metric: took 5.884841ms waiting for pod "coredns-76f75df574-h9pkz" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:00.442701  115078 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:56:56.671845  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:59.172634  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.175416  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:56:58.991760  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:00.992295  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:01.787443  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.787988  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:02.468096  115078 pod_ready.go:102] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:04.965881  115078 pod_ready.go:92] pod "etcd-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.965905  115078 pod_ready.go:81] duration metric: took 4.523195911s waiting for pod "etcd-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.965916  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971414  115078 pod_ready.go:92] pod "kube-apiserver-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.971433  115078 pod_ready.go:81] duration metric: took 5.510214ms waiting for pod "kube-apiserver-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.971441  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977851  115078 pod_ready.go:92] pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.977870  115078 pod_ready.go:81] duration metric: took 6.422623ms waiting for pod "kube-controller-manager-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.977878  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985189  115078 pod_ready.go:92] pod "kube-proxy-zgqvt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:04.985215  115078 pod_ready.go:81] duration metric: took 7.330713ms waiting for pod "kube-proxy-zgqvt" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:04.985224  115078 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230810  115078 pod_ready.go:92] pod "kube-scheduler-no-preload-989559" in "kube-system" namespace has status "Ready":"True"
	I1206 19:57:05.230835  115078 pod_ready.go:81] duration metric: took 245.59313ms waiting for pod "kube-scheduler-no-preload-989559" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:05.230845  115078 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	I1206 19:57:03.189551  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.673064  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:03.491815  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.991689  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.992156  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:05.789026  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.789964  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:07.538620  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.040533  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:08.171042  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.671754  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.491886  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:10.287716  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.788212  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.538291  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.541614  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:12.672138  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:15.171421  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.992060  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.502730  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:14.788301  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.287038  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.288646  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.038893  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.543137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:17.671258  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:20.170885  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:19.991949  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.491591  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:21.787339  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:23.788729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.041590  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.540137  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:22.171069  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.670919  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:24.992198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.492171  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:26.290524  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:28.787761  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.039132  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.542736  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:27.170762  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.171345  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:29.992006  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.490556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.288189  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:33.787785  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:32.039418  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.039727  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:31.670563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.170705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.171236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:34.492161  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.492522  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:35.788140  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:37.788283  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:36.540765  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:39.038645  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.171622  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.670580  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:38.990433  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.990810  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.992228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:40.287403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:42.287578  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.287701  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:41.039767  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.539800  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.543374  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:43.173769  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:45.670574  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:44.995625  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:47.492316  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:46.289397  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.787659  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.038286  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.039013  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:48.176705  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.670177  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:49.991919  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.491478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:50.788175  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.288824  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:52.040785  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.538521  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:53.173256  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.670940  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:54.492526  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.493207  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:55.787745  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:57.788237  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:56.539097  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.039024  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.174463  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.674095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:58.990652  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:00.993255  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:57:59.788454  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:02.287774  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:04.288180  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:01.042813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.541670  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.171100  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.673480  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:03.492375  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:05.991094  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:07.992159  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.288916  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.289817  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:06.038556  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.038962  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.539560  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:08.171785  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.671152  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:09.993042  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.491776  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:10.790823  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.791724  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.540234  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.542433  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:12.672062  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.170654  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:14.993921  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.492163  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:15.289223  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.787808  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.038754  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.039749  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:17.171210  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.670633  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.991157  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.991531  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:19.788614  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:22.288567  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.040007  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.047504  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:25.539859  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:21.671920  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.173543  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:23.993354  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.491975  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:24.789151  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.789703  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.287981  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.038595  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.039044  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:26.670809  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:29.171281  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:28.492552  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:30.990797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.991467  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.289190  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.788860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:32.046392  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.538829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:31.671784  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:33.672095  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:36.171077  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:34.992478  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.492021  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:35.789666  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.287860  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:37.038795  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.537643  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:38.670088  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.171066  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:39.991754  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.994379  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:40.288183  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:42.788826  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:41.539212  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.543524  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:43.674139  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.170213  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:44.491092  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.491632  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:45.287473  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:47.288157  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:49.289525  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:46.038254  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.039117  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.039290  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.170319  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.671091  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:48.492359  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:50.992132  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:51.787368  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.788448  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:52.039474  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:54.540427  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.169921  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.171727  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:53.492764  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:55.993038  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:56.287644  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.288171  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.038915  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.039626  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:57.671011  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:59.671928  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:58:58.491565  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.492398  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.994198  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:00.788591  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.789729  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:01.540414  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:03.547448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:02.172546  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:04.670363  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.492399  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.991600  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:05.287805  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:07.289128  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.039393  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:08.040259  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.541882  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:06.670653  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.172460  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:10.491981  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.991797  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:09.788064  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.291318  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:12.544283  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:15.040829  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:11.673737  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.172972  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.992556  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.492610  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:14.788287  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.789265  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.287925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:17.542363  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:20.039068  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:16.674724  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:18.675236  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.170028  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:19.493199  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.992164  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:21.288023  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.289315  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:22.539662  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.038813  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:23.170153  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.172299  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:24.491811  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:26.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:25.788309  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.791911  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.539832  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:29.540277  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:27.671148  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.171591  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:28.990920  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.992085  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.992394  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:30.288522  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.288574  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:31.542448  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.039116  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:32.671751  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.169968  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:35.492708  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.992344  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:34.787925  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.788270  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:38.788369  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:36.539113  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.040215  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:37.171340  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:39.171482  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.491091  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:42.491915  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:40.789138  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.287352  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.538818  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.539787  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:41.670936  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:43.671019  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.671158  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:44.992666  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.491581  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:45.287493  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:47.787403  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:46.039500  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.538497  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.539750  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:48.171563  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:50.673901  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.991083  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.991943  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:49.788072  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:51.788139  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.788885  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.039532  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.539183  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.177102  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:55.670778  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:53.992408  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.492592  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:56.288587  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.288722  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:57.539766  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.038890  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.171948  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.173211  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 19:59:58.492926  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.992517  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.992971  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:00.291465  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.292084  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.039986  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.541022  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:02.674513  115497 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.407290  115497 pod_ready.go:81] duration metric: took 4m0.000215571s waiting for pod "metrics-server-57f55c9bc5-7bblg" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:04.407325  115497 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:04.407343  115497 pod_ready.go:38] duration metric: took 4m12.62023597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:04.407376  115497 kubeadm.go:640] restartCluster took 4m33.115368763s
	W1206 20:00:04.407460  115497 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:04.407558  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:05.492129  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:07.493228  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:04.788290  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.789396  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:08.789507  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:06.541064  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.040499  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:09.992817  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:12.492671  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.288813  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.788228  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:11.540420  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:13.540837  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:14.492803  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:16.991852  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.762771  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.35517444s)
	I1206 20:00:18.762878  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:18.777691  115497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:18.788508  115497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:18.798417  115497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:18.798483  115497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:18.858377  115497 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:18.858486  115497 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:19.020664  115497 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:19.020845  115497 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:19.020979  115497 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 20:00:19.294254  115497 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:15.788560  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.288173  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:19.296186  115497 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:19.296294  115497 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:19.296394  115497 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:19.296512  115497 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:19.296601  115497 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:19.296712  115497 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:19.296779  115497 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:19.296938  115497 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:19.297044  115497 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:19.297141  115497 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:19.297228  115497 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:19.297296  115497 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:19.297374  115497 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:19.401712  115497 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:19.667664  115497 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:19.977926  115497 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:20.161984  115497 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:20.162704  115497 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:20.165273  115497 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:16.040687  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:18.540495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.167168  115497 out.go:204]   - Booting up control plane ...
	I1206 20:00:20.167327  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:20.167488  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:20.167596  115497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:20.186839  115497 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:20.187950  115497 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:20.188122  115497 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:20.329099  115497 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:18.991946  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:21.490687  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:20.290780  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:22.293161  115591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.270450  115591 pod_ready.go:81] duration metric: took 4m0.000401122s waiting for pod "metrics-server-57f55c9bc5-dr9k8" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:23.270504  115591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:23.270527  115591 pod_ready.go:38] duration metric: took 4m9.100871827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:23.270576  115591 kubeadm.go:640] restartCluster took 4m28.999844958s
	W1206 20:00:23.270666  115591 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:23.270705  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:21.040410  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.041625  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:25.044168  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:23.492875  115217 pod_ready.go:102] pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:24.689131  115217 pod_ready.go:81] duration metric: took 4m0.000750192s waiting for pod "metrics-server-74d5856cc6-jg2s7" in "kube-system" namespace to be "Ready" ...
	E1206 20:00:24.689173  115217 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:00:24.689203  115217 pod_ready.go:38] duration metric: took 4m1.202987977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:24.689247  115217 kubeadm.go:640] restartCluster took 5m10.459408033s
	W1206 20:00:24.689356  115217 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1206 20:00:24.689392  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 20:00:29.334312  115497 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004152 seconds
	I1206 20:00:29.334473  115497 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:29.360390  115497 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:29.898911  115497 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:29.899167  115497 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-380424 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:30.416589  115497 kubeadm.go:322] [bootstrap-token] Using token: gsw79m.btql0t11yc11efah
	I1206 20:00:30.418388  115497 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:30.418538  115497 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:30.424651  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:30.439637  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:30.443854  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:30.448439  115497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:30.454084  115497 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:30.473340  115497 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:30.748803  115497 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:30.835721  115497 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:30.837289  115497 kubeadm.go:322] 
	I1206 20:00:30.837362  115497 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:30.837381  115497 kubeadm.go:322] 
	I1206 20:00:30.837449  115497 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:30.837457  115497 kubeadm.go:322] 
	I1206 20:00:30.837485  115497 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:30.837597  115497 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:30.837675  115497 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:30.837684  115497 kubeadm.go:322] 
	I1206 20:00:30.837760  115497 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:30.837770  115497 kubeadm.go:322] 
	I1206 20:00:30.837826  115497 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:30.837833  115497 kubeadm.go:322] 
	I1206 20:00:30.837899  115497 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:30.838016  115497 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:30.838114  115497 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:30.838124  115497 kubeadm.go:322] 
	I1206 20:00:30.838224  115497 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:30.838316  115497 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:30.838333  115497 kubeadm.go:322] 
	I1206 20:00:30.838409  115497 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838522  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:30.838559  115497 kubeadm.go:322] 	--control-plane 
	I1206 20:00:30.838568  115497 kubeadm.go:322] 
	I1206 20:00:30.838686  115497 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:30.838699  115497 kubeadm.go:322] 
	I1206 20:00:30.838805  115497 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token gsw79m.btql0t11yc11efah \
	I1206 20:00:30.838952  115497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:30.839686  115497 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:30.839714  115497 cni.go:84] Creating CNI manager for ""
	I1206 20:00:30.839727  115497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:30.841824  115497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:27.540848  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.038457  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:30.843246  115497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:30.916583  115497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:30.974088  115497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:30.974183  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=default-k8s-diff-port-380424 minikube.k8s.io/updated_at=2023_12_06T20_00_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.400910  115497 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:31.401056  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:31.320362  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.630947418s)
	I1206 20:00:31.320445  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:31.349765  115217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:31.369412  115217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:31.381350  115217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:31.381410  115217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1206 20:00:31.626397  115217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:32.039425  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:34.041934  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:31.516285  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.139221  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:32.639059  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.139995  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:33.639038  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.139842  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:34.640037  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.139893  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:35.639961  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:36.139749  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.383787  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.113041618s)
	I1206 20:00:38.383859  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:38.397718  115591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 20:00:38.406748  115591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 20:00:38.415574  115591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 20:00:38.415633  115591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 20:00:38.485595  115591 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 20:00:38.485781  115591 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:38.659892  115591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:38.660073  115591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:38.660209  115591 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:38.939756  115591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:38.941971  115591 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:38.942103  115591 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:38.942200  115591 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:38.942296  115591 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:38.942708  115591 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:38.943817  115591 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:38.944130  115591 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:38.944894  115591 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:38.945607  115591 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:38.946355  115591 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:38.947015  115591 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:38.947720  115591 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:38.947795  115591 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:39.140045  115591 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:39.300047  115591 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:39.418439  115591 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:40.060938  115591 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:40.061616  115591 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:40.064208  115591 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:36.042049  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:38.540429  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:36.639372  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.139213  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:37.639506  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.139159  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:38.639007  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.139972  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:39.639969  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.139910  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:40.639836  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.139009  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:41.639153  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.139055  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:42.639853  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.139934  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:43.639741  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.139776  115497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:44.279581  115497 kubeadm.go:1088] duration metric: took 13.305461955s to wait for elevateKubeSystemPrivileges.
	I1206 20:00:44.279625  115497 kubeadm.go:406] StartCluster complete in 5m13.04588426s
	I1206 20:00:44.279660  115497 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.279765  115497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:00:44.282748  115497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:00:44.285263  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:00:44.285351  115497 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:00:44.285434  115497 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285459  115497 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285471  115497 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:00:44.285478  115497 config.go:182] Loaded profile config "default-k8s-diff-port-380424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:00:44.285531  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285542  115497 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285561  115497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-380424"
	I1206 20:00:44.285719  115497 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-380424"
	I1206 20:00:44.285738  115497 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.285747  115497 addons.go:240] addon metrics-server should already be in state true
	I1206 20:00:44.285797  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.285998  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286023  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286026  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.286167  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.286190  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.306223  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1206 20:00:44.306441  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39661
	I1206 20:00:44.307505  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.307637  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.308463  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I1206 20:00:44.308651  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.308672  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309154  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.309173  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.309295  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.309539  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.310150  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.310183  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.310431  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.312432  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.313004  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.313020  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.315047  115497 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-380424"
	W1206 20:00:44.315065  115497 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:00:44.315094  115497 host.go:66] Checking if "default-k8s-diff-port-380424" exists ...
	I1206 20:00:44.315499  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.315523  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.316248  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.316893  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.316920  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.335555  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I1206 20:00:44.335908  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1206 20:00:44.336636  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.336749  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.337379  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337404  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337791  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.337818  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.337895  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.338474  115497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:00:44.338502  115497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:00:44.338944  115497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-380424" context rescaled to 1 replicas
	I1206 20:00:44.338979  115497 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.22 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:00:44.340731  115497 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:44.339696  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.342367  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:44.342537  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.348774  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I1206 20:00:44.348808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.350935  115497 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:00:44.349433  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.353022  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:00:44.353036  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:00:44.353060  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.353493  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.353512  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.354850  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.355732  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.356894  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.359438  115497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1206 20:00:44.360009  115497 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:00:44.360502  115497 main.go:141] libmachine: Using API Version  1
	I1206 20:00:44.360525  115497 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:00:44.360899  115497 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:00:44.361092  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetState
	I1206 20:00:44.362575  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.362605  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.362663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.363067  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.363259  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.363310  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.363544  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.363628  115497 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.363643  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:00:44.363663  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.365352  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .DriverName
	I1206 20:00:44.367261  115497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:00:40.066048  115591 out.go:204]   - Booting up control plane ...
	I1206 20:00:40.066207  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:40.066320  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:40.069077  115591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:40.086558  115591 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:40.087856  115591 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:40.087969  115591 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 20:00:40.224157  115591 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.313051  115217 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1206 20:00:45.313125  115217 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 20:00:45.313226  115217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 20:00:45.313355  115217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 20:00:45.313466  115217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 20:00:45.313591  115217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 20:00:45.313697  115217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 20:00:45.313767  115217 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1206 20:00:45.313844  115217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 20:00:45.315759  115217 out.go:204]   - Generating certificates and keys ...
	I1206 20:00:45.315876  115217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 20:00:45.315980  115217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 20:00:45.316085  115217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 20:00:45.316158  115217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1206 20:00:45.316252  115217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 20:00:45.316320  115217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1206 20:00:45.316420  115217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1206 20:00:45.316505  115217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1206 20:00:45.316608  115217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 20:00:45.316707  115217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 20:00:45.316761  115217 kubeadm.go:322] [certs] Using the existing "sa" key
	I1206 20:00:45.316838  115217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 20:00:45.316909  115217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 20:00:45.316982  115217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 20:00:45.317068  115217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 20:00:45.317136  115217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 20:00:45.317221  115217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 20:00:45.318915  115217 out.go:204]   - Booting up control plane ...
	I1206 20:00:45.319042  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 20:00:45.319145  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 20:00:45.319253  115217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 20:00:45.319367  115217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 20:00:45.319568  115217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 20:00:45.319690  115217 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504419 seconds
	I1206 20:00:45.319828  115217 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:45.319978  115217 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:45.320042  115217 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:45.320189  115217 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-448851 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 20:00:45.320255  115217 kubeadm.go:322] [bootstrap-token] Using token: ms33mw.f0m2wm1rokle0nnu
	I1206 20:00:45.321976  115217 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:45.322105  115217 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:45.322229  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:45.322373  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:45.322532  115217 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:45.322673  115217 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:45.322759  115217 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:45.322845  115217 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:45.322857  115217 kubeadm.go:322] 
	I1206 20:00:45.322936  115217 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:45.322945  115217 kubeadm.go:322] 
	I1206 20:00:45.323055  115217 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:45.323071  115217 kubeadm.go:322] 
	I1206 20:00:45.323105  115217 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:45.323196  115217 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:45.323270  115217 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:45.323282  115217 kubeadm.go:322] 
	I1206 20:00:45.323373  115217 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:45.323477  115217 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:45.323575  115217 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:45.323590  115217 kubeadm.go:322] 
	I1206 20:00:45.323736  115217 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1206 20:00:45.323840  115217 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:45.323855  115217 kubeadm.go:322] 
	I1206 20:00:45.323984  115217 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324187  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:45.324248  115217 kubeadm.go:322]     --control-plane 	  
	I1206 20:00:45.324266  115217 kubeadm.go:322] 
	I1206 20:00:45.324386  115217 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:45.324397  115217 kubeadm.go:322] 
	I1206 20:00:45.324501  115217 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ms33mw.f0m2wm1rokle0nnu \
	I1206 20:00:45.324651  115217 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:45.324664  115217 cni.go:84] Creating CNI manager for ""
	I1206 20:00:45.324675  115217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:45.327284  115217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:41.039495  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:43.041892  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:45.042744  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:44.369437  115497 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.369449  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.369458  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:00:44.369482  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHHostname
	I1206 20:00:44.373360  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373394  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373415  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373465  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.373538  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:24:2b", ip: ""} in network mk-default-k8s-diff-port-380424: {Iface:virbr1 ExpiryTime:2023-12-06 20:55:17 +0000 UTC Type:0 Mac:52:54:00:15:24:2b Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:default-k8s-diff-port-380424 Clientid:01:52:54:00:15:24:2b}
	I1206 20:00:44.373554  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | domain default-k8s-diff-port-380424 has defined IP address 192.168.72.22 and MAC address 52:54:00:15:24:2b in network mk-default-k8s-diff-port-380424
	I1206 20:00:44.373769  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.373830  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHPort
	I1206 20:00:44.374020  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHKeyPath
	I1206 20:00:44.374077  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.374221  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.374800  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .GetSSHUsername
	I1206 20:00:44.375017  115497 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/default-k8s-diff-port-380424/id_rsa Username:docker}
	I1206 20:00:44.528574  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:00:44.553349  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:00:44.553382  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:00:44.604100  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:00:44.605360  115497 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.605799  115497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:00:44.610007  115497 node_ready.go:49] node "default-k8s-diff-port-380424" has status "Ready":"True"
	I1206 20:00:44.610039  115497 node_ready.go:38] duration metric: took 4.647914ms waiting for node "default-k8s-diff-port-380424" to be "Ready" ...
	I1206 20:00:44.610052  115497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:44.622684  115497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:44.639914  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:00:44.640005  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:00:44.710284  115497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:44.710318  115497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:00:44.767014  115497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:00:46.656182  115497 pod_ready.go:102] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:46.941717  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.413097049s)
	I1206 20:00:46.941764  115497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.33594011s)
	I1206 20:00:46.941787  115497 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1206 20:00:46.941793  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941733  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337595925s)
	I1206 20:00:46.941808  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.941841  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.941863  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.942167  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.942187  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.942198  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.942207  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.943997  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944031  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944041  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944052  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944060  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.944057  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944077  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.944088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.944363  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.944401  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.944419  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:46.984172  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:46.984206  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:46.984675  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:46.984714  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:46.984733  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.345448  115497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.5783821s)
	I1206 20:00:47.345552  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.345573  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.345987  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.346033  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346046  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346056  115497 main.go:141] libmachine: Making call to close driver server
	I1206 20:00:47.346088  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) Calling .Close
	I1206 20:00:47.346359  115497 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:00:47.346380  115497 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:00:47.346392  115497 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-380424"
	I1206 20:00:47.346442  115497 main.go:141] libmachine: (default-k8s-diff-port-380424) DBG | Closing plugin on server side
	I1206 20:00:47.348281  115497 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1206 20:00:45.328763  115217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:45.342986  115217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:45.373351  115217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:45.373503  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.373559  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=old-k8s-version-448851 minikube.k8s.io/updated_at=2023_12_06T20_00_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.701779  115217 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:45.701907  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:45.815705  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.445065  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:46.945361  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.444737  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:47.945540  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.228883  115591 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004688 seconds
	I1206 20:00:49.229058  115591 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 20:00:49.258512  115591 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 20:00:49.793797  115591 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 20:00:49.794010  115591 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-209025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 20:00:50.315415  115591 kubeadm.go:322] [bootstrap-token] Using token: j4xv0f.htia0y0wrnbqnji6
	I1206 20:00:47.349693  115497 addons.go:502] enable addons completed in 3.064343142s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1206 20:00:48.648085  115497 pod_ready.go:92] pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.648116  115497 pod_ready.go:81] duration metric: took 4.025396521s waiting for pod "coredns-5dd5756b68-x6p7t" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.648132  115497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660202  115497 pod_ready.go:92] pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.660235  115497 pod_ready.go:81] duration metric: took 12.09317ms waiting for pod "etcd-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.660248  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666568  115497 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.666666  115497 pod_ready.go:81] duration metric: took 6.407781ms waiting for pod "kube-apiserver-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.666694  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679566  115497 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:48.679653  115497 pod_ready.go:81] duration metric: took 12.938485ms waiting for pod "kube-controller-manager-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:48.679675  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554241  115497 pod_ready.go:92] pod "kube-proxy-khh5n" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.554266  115497 pod_ready.go:81] duration metric: took 874.584613ms waiting for pod "kube-proxy-khh5n" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.554275  115497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845110  115497 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace has status "Ready":"True"
	I1206 20:00:49.845140  115497 pod_ready.go:81] duration metric: took 290.857787ms waiting for pod "kube-scheduler-default-k8s-diff-port-380424" in "kube-system" namespace to be "Ready" ...
	I1206 20:00:49.845152  115497 pod_ready.go:38] duration metric: took 5.235087469s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:00:49.845172  115497 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:00:49.845251  115497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:00:49.861908  115497 api_server.go:72] duration metric: took 5.522870891s to wait for apiserver process to appear ...
	I1206 20:00:49.861943  115497 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:00:49.861965  115497 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8444/healthz ...
	I1206 20:00:49.868675  115497 api_server.go:279] https://192.168.72.22:8444/healthz returned 200:
	ok
	I1206 20:00:49.870214  115497 api_server.go:141] control plane version: v1.28.4
	I1206 20:00:49.870254  115497 api_server.go:131] duration metric: took 8.303187ms to wait for apiserver health ...
	I1206 20:00:49.870266  115497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:00:50.047974  115497 system_pods.go:59] 8 kube-system pods found
	I1206 20:00:50.048004  115497 system_pods.go:61] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.048011  115497 system_pods.go:61] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.048018  115497 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.048025  115497 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.048030  115497 system_pods.go:61] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.048036  115497 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.048045  115497 system_pods.go:61] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.048052  115497 system_pods.go:61] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.048063  115497 system_pods.go:74] duration metric: took 177.789423ms to wait for pod list to return data ...
	I1206 20:00:50.048073  115497 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:00:50.246867  115497 default_sa.go:45] found service account: "default"
	I1206 20:00:50.246903  115497 default_sa.go:55] duration metric: took 198.823117ms for default service account to be created ...
	I1206 20:00:50.246914  115497 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:00:50.447688  115497 system_pods.go:86] 8 kube-system pods found
	I1206 20:00:50.447777  115497 system_pods.go:89] "coredns-5dd5756b68-x6p7t" [de75d299-fede-4fe1-a748-31720acc76eb] Running
	I1206 20:00:50.447798  115497 system_pods.go:89] "etcd-default-k8s-diff-port-380424" [36170db0-a926-4c8d-8283-9af453167ee1] Running
	I1206 20:00:50.447815  115497 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-380424" [72412f12-9e20-4905-89ad-65c67a2e5a7b] Running
	I1206 20:00:50.447846  115497 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-380424" [04d32349-9a28-4270-bd15-2275e74b6713] Running
	I1206 20:00:50.447870  115497 system_pods.go:89] "kube-proxy-khh5n" [acac843d-9849-4bda-af66-2422b319665e] Running
	I1206 20:00:50.447886  115497 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-380424" [a5b9f2ed-8cb1-4912-af86-d231d9b275ba] Running
	I1206 20:00:50.447904  115497 system_pods.go:89] "metrics-server-57f55c9bc5-xpbtp" [280fb2bc-d8d8-4684-8be1-ec0ace47ef77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:00:50.447920  115497 system_pods.go:89] "storage-provisioner" [e1def8b1-c6bb-48df-b2f2-34867a409cb7] Running
	I1206 20:00:50.447953  115497 system_pods.go:126] duration metric: took 201.030369ms to wait for k8s-apps to be running ...
	I1206 20:00:50.447978  115497 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:00:50.448057  115497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:00:50.468801  115497 system_svc.go:56] duration metric: took 20.810606ms WaitForService to wait for kubelet.
	I1206 20:00:50.468837  115497 kubeadm.go:581] duration metric: took 6.129827661s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:00:50.468860  115497 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:00:50.646083  115497 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:00:50.646124  115497 node_conditions.go:123] node cpu capacity is 2
	I1206 20:00:50.646138  115497 node_conditions.go:105] duration metric: took 177.272089ms to run NodePressure ...
	I1206 20:00:50.646153  115497 start.go:228] waiting for startup goroutines ...
	I1206 20:00:50.646164  115497 start.go:233] waiting for cluster config update ...
	I1206 20:00:50.646184  115497 start.go:242] writing updated cluster config ...
	I1206 20:00:50.646551  115497 ssh_runner.go:195] Run: rm -f paused
	I1206 20:00:50.711246  115497 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:00:50.713989  115497 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-380424" cluster and "default" namespace by default
	I1206 20:00:50.317018  115591 out.go:204]   - Configuring RBAC rules ...
	I1206 20:00:50.317155  115591 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 20:00:50.325410  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 20:00:50.335197  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 20:00:50.339351  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 20:00:50.343930  115591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 20:00:50.352323  115591 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 20:00:50.375514  115591 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 20:00:50.703397  115591 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 20:00:50.753323  115591 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 20:00:50.753351  115591 kubeadm.go:322] 
	I1206 20:00:50.753419  115591 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 20:00:50.753430  115591 kubeadm.go:322] 
	I1206 20:00:50.753522  115591 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 20:00:50.753539  115591 kubeadm.go:322] 
	I1206 20:00:50.753570  115591 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 20:00:50.753642  115591 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 20:00:50.753706  115591 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 20:00:50.753717  115591 kubeadm.go:322] 
	I1206 20:00:50.753780  115591 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 20:00:50.753790  115591 kubeadm.go:322] 
	I1206 20:00:50.753847  115591 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 20:00:50.753862  115591 kubeadm.go:322] 
	I1206 20:00:50.753928  115591 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 20:00:50.754020  115591 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 20:00:50.754109  115591 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 20:00:50.754120  115591 kubeadm.go:322] 
	I1206 20:00:50.754221  115591 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 20:00:50.754317  115591 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 20:00:50.754327  115591 kubeadm.go:322] 
	I1206 20:00:50.754426  115591 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754552  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 \
	I1206 20:00:50.754583  115591 kubeadm.go:322] 	--control-plane 
	I1206 20:00:50.754593  115591 kubeadm.go:322] 
	I1206 20:00:50.754690  115591 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 20:00:50.754707  115591 kubeadm.go:322] 
	I1206 20:00:50.754802  115591 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token j4xv0f.htia0y0wrnbqnji6 \
	I1206 20:00:50.754931  115591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3173d0205a58c67077c3594ee458dc14bc41fcece32682cbfd9ea1126e12b817 
	I1206 20:00:50.755776  115591 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 20:00:50.755809  115591 cni.go:84] Creating CNI manager for ""
	I1206 20:00:50.755820  115591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 20:00:50.759045  115591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 20:00:47.539932  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:50.039553  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:48.445172  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:48.944908  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.445418  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:49.944612  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.445278  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.944545  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.444775  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.945470  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.445365  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.944742  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.760722  115591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 20:00:50.792095  115591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1206 20:00:50.854264  115591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 20:00:50.854443  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.854549  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=embed-certs-209025 minikube.k8s.io/updated_at=2023_12_06T20_00_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:50.894717  115591 ops.go:34] apiserver oom_adj: -16
	I1206 20:00:51.388829  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:51.515185  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.132878  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.633171  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.132766  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.632887  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.132824  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:52.044531  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:54.538924  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:53.444641  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:53.945468  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.444996  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.944687  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.444757  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.945342  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.445585  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.945489  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.445628  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.944895  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:54.632961  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.132361  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:55.632305  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.132439  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:56.632252  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.132956  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:57.633210  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.133090  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.632198  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.133167  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.445440  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:58.945554  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.445298  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:00:59.945574  115217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.179151  115217 kubeadm.go:1088] duration metric: took 14.805687634s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:00.179185  115217 kubeadm.go:406] StartCluster complete in 5m46.007596294s
	I1206 20:01:00.179204  115217 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.179291  115217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:00.181490  115217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:00.181810  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:00.181933  115217 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:00.182031  115217 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182063  115217 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-448851"
	W1206 20:01:00.182071  115217 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:00.182126  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.182126  115217 config.go:182] Loaded profile config "old-k8s-version-448851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1206 20:01:00.182180  115217 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182198  115217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448851"
	I1206 20:01:00.182554  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182572  115217 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-448851"
	I1206 20:01:00.182581  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182591  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.182596  115217 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-448851"
	W1206 20:01:00.182606  115217 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:00.182613  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.182735  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.183101  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.183146  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.201450  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I1206 20:01:00.203683  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I1206 20:01:00.203715  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.203800  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1206 20:01:00.204181  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204341  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.204386  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204409  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204863  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204877  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.204884  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204895  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.204950  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205328  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205333  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.205489  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.205520  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.205560  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.205992  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.206064  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.209487  115217 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-448851"
	W1206 20:01:00.209512  115217 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:00.209545  115217 host.go:66] Checking if "old-k8s-version-448851" exists ...
	I1206 20:01:00.209987  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.210033  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.227092  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1206 20:01:00.227961  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.228610  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.228633  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.229107  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.229342  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.230638  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I1206 20:01:00.231552  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.231863  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.235076  115217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:00.232196  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.232926  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I1206 20:01:00.237258  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.237284  115217 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.237310  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:00.237333  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.237682  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.238034  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.238212  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.238240  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.238580  115217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:00.238612  115217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:00.238977  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.239198  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.240631  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.243107  115217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:00.241155  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.241833  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.245218  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:00.245244  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:00.245267  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.245315  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.245333  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.245505  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.245639  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.245737  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.248492  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249278  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.249313  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.249597  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.249811  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.249971  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.250090  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.259179  115217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I1206 20:01:00.259617  115217 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:00.260068  115217 main.go:141] libmachine: Using API Version  1
	I1206 20:01:00.260090  115217 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:00.260461  115217 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:00.260685  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetState
	I1206 20:01:00.262284  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .DriverName
	I1206 20:01:00.262586  115217 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.262604  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:00.262623  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHHostname
	I1206 20:01:00.265183  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265643  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:ad:26", ip: ""} in network mk-old-k8s-version-448851: {Iface:virbr4 ExpiryTime:2023-12-06 20:54:55 +0000 UTC Type:0 Mac:52:54:00:91:ad:26 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:old-k8s-version-448851 Clientid:01:52:54:00:91:ad:26}
	I1206 20:01:00.265661  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | domain old-k8s-version-448851 has defined IP address 192.168.61.33 and MAC address 52:54:00:91:ad:26 in network mk-old-k8s-version-448851
	I1206 20:01:00.265890  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHPort
	I1206 20:01:00.266078  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHKeyPath
	I1206 20:01:00.266240  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .GetSSHUsername
	I1206 20:01:00.266941  115217 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/old-k8s-version-448851/id_rsa Username:docker}
	I1206 20:01:00.271403  115217 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-448851" context rescaled to 1 replicas
	I1206 20:01:00.271435  115217 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:00.273197  115217 out.go:177] * Verifying Kubernetes components...
	I1206 20:00:57.039307  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:00:59.039639  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:00.274454  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:00.597204  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:00.597240  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:00.621632  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:00.623444  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:00.630185  115217 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.630280  115217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:00.633576  115217 node_ready.go:49] node "old-k8s-version-448851" has status "Ready":"True"
	I1206 20:01:00.633603  115217 node_ready.go:38] duration metric: took 3.385927ms waiting for node "old-k8s-version-448851" to be "Ready" ...
	I1206 20:01:00.633616  115217 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:00.717216  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:00.717273  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:00.735998  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:00.866186  115217 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:00.866218  115217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:01.066040  115217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:01.835164  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.213479825s)
	I1206 20:01:01.835230  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835243  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835558  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835605  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835615  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.835648  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.835663  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.835939  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.835974  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.835983  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:01.872799  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:01.872835  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:01.873282  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:01.873317  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:01.873336  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.258697  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.635202106s)
	I1206 20:01:02.258754  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.258769  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.258773  115217 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.628450705s)
	I1206 20:01:02.258806  115217 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:02.259113  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.260973  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261002  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261014  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.261025  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.261416  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.261440  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.261424  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.375593  115217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.309500554s)
	I1206 20:01:02.375659  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.375680  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376064  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376155  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376168  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376185  115217 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:02.376193  115217 main.go:141] libmachine: (old-k8s-version-448851) Calling .Close
	I1206 20:01:02.376522  115217 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:02.376532  115217 main.go:141] libmachine: (old-k8s-version-448851) DBG | Closing plugin on server side
	I1206 20:01:02.376543  115217 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:02.376559  115217 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-448851"
	I1206 20:01:02.378457  115217 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:02.380099  115217 addons.go:502] enable addons completed in 2.198162438s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1206 20:00:59.632971  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.133124  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:00.633148  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.132260  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:01.632323  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.132575  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:02.632268  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.132789  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:03.633155  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.132754  115591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 20:01:04.321130  115591 kubeadm.go:1088] duration metric: took 13.466729355s to wait for elevateKubeSystemPrivileges.
	I1206 20:01:04.321175  115591 kubeadm.go:406] StartCluster complete in 5m10.1110739s
	I1206 20:01:04.321200  115591 settings.go:142] acquiring lock: {Name:mkfeb988d43ca5824ac2b3af603600358ae0dd6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.321311  115591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 20:01:04.324158  115591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/kubeconfig: {Name:mkb891a2b2c86b4a1b0f4bb8fd4e51233eb9c683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 20:01:04.324502  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 20:01:04.324531  115591 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 20:01:04.324609  115591 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-209025"
	I1206 20:01:04.324633  115591 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-209025"
	W1206 20:01:04.324640  115591 addons.go:240] addon storage-provisioner should already be in state true
	I1206 20:01:04.324670  115591 addons.go:69] Setting default-storageclass=true in profile "embed-certs-209025"
	I1206 20:01:04.324699  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.324702  115591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-209025"
	I1206 20:01:04.324729  115591 config.go:182] Loaded profile config "embed-certs-209025": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 20:01:04.324799  115591 addons.go:69] Setting metrics-server=true in profile "embed-certs-209025"
	I1206 20:01:04.324813  115591 addons.go:231] Setting addon metrics-server=true in "embed-certs-209025"
	W1206 20:01:04.324820  115591 addons.go:240] addon metrics-server should already be in state true
	I1206 20:01:04.324858  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.325100  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325126  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325127  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325163  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.325191  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.325213  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.344127  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I1206 20:01:04.344361  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I1206 20:01:04.344866  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.344978  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.345615  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345635  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.345756  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.345766  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.346201  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.346772  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.346821  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.347367  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.347741  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.348264  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
	I1206 20:01:04.348754  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.349655  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.349676  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.350156  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.352233  115591 addons.go:231] Setting addon default-storageclass=true in "embed-certs-209025"
	W1206 20:01:04.352257  115591 addons.go:240] addon default-storageclass should already be in state true
	I1206 20:01:04.352286  115591 host.go:66] Checking if "embed-certs-209025" exists ...
	I1206 20:01:04.352700  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.352734  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.353530  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.353563  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.365607  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I1206 20:01:04.366094  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.366493  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.366514  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.366780  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.366908  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.368611  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.370655  115591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 20:01:04.372351  115591 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.372372  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1206 20:01:04.372376  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 20:01:04.372402  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.373021  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I1206 20:01:04.374446  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.375104  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.375126  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.375570  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.375769  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.376448  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.376851  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.376907  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.377123  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.377377  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.377531  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.379514  115591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1206 20:01:04.377862  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.378152  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.381562  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.381682  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 20:01:04.381700  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 20:01:04.381722  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.382619  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.382788  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.383576  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.384146  115591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:01:04.384176  115591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:01:04.386297  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.386684  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.386734  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.387477  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.387726  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.387913  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.388055  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	I1206 20:01:04.401629  115591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I1206 20:01:04.402214  115591 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:01:04.402804  115591 main.go:141] libmachine: Using API Version  1
	I1206 20:01:04.402826  115591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:01:04.403127  115591 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:01:04.403337  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetState
	I1206 20:01:04.405059  115591 main.go:141] libmachine: (embed-certs-209025) Calling .DriverName
	I1206 20:01:04.405404  115591 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.405427  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 20:01:04.405449  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHHostname
	I1206 20:01:04.408608  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409145  115591 main.go:141] libmachine: (embed-certs-209025) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:27:5b", ip: ""} in network mk-embed-certs-209025: {Iface:virbr3 ExpiryTime:2023-12-06 20:55:37 +0000 UTC Type:0 Mac:52:54:00:4d:27:5b Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:embed-certs-209025 Clientid:01:52:54:00:4d:27:5b}
	I1206 20:01:04.409176  115591 main.go:141] libmachine: (embed-certs-209025) DBG | domain embed-certs-209025 has defined IP address 192.168.50.164 and MAC address 52:54:00:4d:27:5b in network mk-embed-certs-209025
	I1206 20:01:04.409443  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHPort
	I1206 20:01:04.409640  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHKeyPath
	I1206 20:01:04.409860  115591 main.go:141] libmachine: (embed-certs-209025) Calling .GetSSHUsername
	I1206 20:01:04.410016  115591 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/embed-certs-209025/id_rsa Username:docker}
	W1206 20:01:04.462788  115591 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "embed-certs-209025" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1206 20:01:04.462843  115591 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1206 20:01:04.462872  115591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 20:01:04.464916  115591 out.go:177] * Verifying Kubernetes components...
	I1206 20:01:04.466388  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:01.039870  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:03.550944  115078 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.231905  115078 pod_ready.go:81] duration metric: took 4m0.001038985s waiting for pod "metrics-server-57f55c9bc5-vz7qc" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:05.231950  115078 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1206 20:01:05.231962  115078 pod_ready.go:38] duration metric: took 4m4.801417566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:05.231988  115078 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:05.232081  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:05.232155  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:05.294538  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:05.294570  115078 cri.go:89] found id: ""
	I1206 20:01:05.294581  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:05.294643  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.300221  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:05.300300  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:05.359655  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:05.359685  115078 cri.go:89] found id: ""
	I1206 20:01:05.359696  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:05.359759  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.364518  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:05.364600  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:05.408448  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:05.408490  115078 cri.go:89] found id: ""
	I1206 20:01:05.408510  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:05.408575  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.413345  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:05.413428  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:05.462932  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.462960  115078 cri.go:89] found id: ""
	I1206 20:01:05.462971  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:05.463034  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.468632  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:05.468713  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:05.519690  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:05.519720  115078 cri.go:89] found id: ""
	I1206 20:01:05.519731  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:05.519789  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.525847  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:05.525933  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:05.580475  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:05.580537  115078 cri.go:89] found id: ""
	I1206 20:01:05.580550  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:05.580623  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.585602  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:05.585688  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:05.636350  115078 cri.go:89] found id: ""
	I1206 20:01:05.636383  115078 logs.go:284] 0 containers: []
	W1206 20:01:05.636394  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:05.636403  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:05.636469  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:05.678819  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:05.678846  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:05.678853  115078 cri.go:89] found id: ""
	I1206 20:01:05.678863  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:05.678929  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.683845  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:05.689989  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:05.690021  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:05.745510  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:05.745554  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:04.580869  115591 node_ready.go:35] waiting up to 6m0s for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.580933  115591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 20:01:04.585219  115591 node_ready.go:49] node "embed-certs-209025" has status "Ready":"True"
	I1206 20:01:04.585267  115591 node_ready.go:38] duration metric: took 4.363508ms waiting for node "embed-certs-209025" to be "Ready" ...
	I1206 20:01:04.585281  115591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:04.595166  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:04.611829  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 20:01:04.622127  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 20:01:04.628233  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 20:01:04.628260  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1206 20:01:04.706473  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 20:01:04.706498  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 20:01:04.790827  115591 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:04.790868  115591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 20:01:04.840367  115591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 20:01:06.312054  115591 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.73108071s)
	I1206 20:01:06.312092  115591 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1206 20:01:06.312099  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.700233834s)
	I1206 20:01:06.312147  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312162  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312503  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312519  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312531  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.312541  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.312895  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.312985  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.312952  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:06.334314  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:06.334343  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:06.334719  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:06.334742  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:06.677046  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.176051  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.553877678s)
	I1206 20:01:07.176112  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176124  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176520  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176551  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.176570  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.176584  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.176859  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.176852  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.176884  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.287377  115591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.446934189s)
	I1206 20:01:07.287525  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.287586  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288055  115591 main.go:141] libmachine: (embed-certs-209025) DBG | Closing plugin on server side
	I1206 20:01:07.288055  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288082  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288096  115591 main.go:141] libmachine: Making call to close driver server
	I1206 20:01:07.288105  115591 main.go:141] libmachine: (embed-certs-209025) Calling .Close
	I1206 20:01:07.288358  115591 main.go:141] libmachine: Successfully made call to close driver server
	I1206 20:01:07.288372  115591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1206 20:01:07.288384  115591 addons.go:467] Verifying addon metrics-server=true in "embed-certs-209025"
	I1206 20:01:07.291120  115591 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1206 20:01:03.100131  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:05.107571  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.599078  115217 pod_ready.go:102] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:07.292151  115591 addons.go:502] enable addons completed in 2.967619291s: enabled=[default-storageclass storage-provisioner metrics-server]
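
Note on the addon phase above: besides applying the storageclass, storage-provisioner and metrics-server manifests, it injects a host.minikube.internal record into the CoreDNS Corefile (the `... get configmap coredns -o yaml | sed ... | kubectl replace -f -` pipeline that completed at 20:01:06.312054). A minimal sketch of that step, assuming only a kubectl binary and a kubeconfig path; the package and helper name are hypothetical and do not reproduce minikube's actual code:

    // Minimal sketch, assuming kubectl is available; names are hypothetical.
    package dnsutil

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // InjectHostRecord edits the kube-system/coredns ConfigMap so that
    // host.minikube.internal resolves to hostIP, mirroring the log's pipeline:
    // fetch the ConfigMap as YAML, splice a hosts{} block in front of the
    // "forward . /etc/resolv.conf" plugin line, and feed the result back
    // through `kubectl replace -f -`.
    func InjectHostRecord(kubectl, kubeconfig, hostIP string) error {
        out, err := exec.Command(kubectl, "--kubeconfig="+kubeconfig,
            "-n", "kube-system", "get", "configmap", "coredns", "-o", "yaml").Output()
        if err != nil {
            return fmt.Errorf("get coredns configmap: %w", err)
        }
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        patched := strings.Replace(string(out),
            "        forward . /etc/resolv.conf",
            hosts+"        forward . /etc/resolv.conf", 1)

        replace := exec.Command(kubectl, "--kubeconfig="+kubeconfig, "replace", "-f", "-")
        replace.Stdin = strings.NewReader(patched)
        return replace.Run()
    }
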
	I1206 20:01:09.122709  115591 pod_ready.go:102] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"False"
	I1206 20:01:06.258156  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:06.258193  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:06.321049  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:06.321084  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:06.376243  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:06.376281  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:06.441701  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:06.441742  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:06.493399  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:06.493440  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:06.545681  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:06.545717  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:06.602830  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:06.602864  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:06.618874  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:06.618903  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:06.694329  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:06.694375  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:06.748217  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:06.748255  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:06.933616  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:06.933655  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.511340  115078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.530228  115078 api_server.go:72] duration metric: took 4m16.464196787s to wait for apiserver process to appear ...
	I1206 20:01:09.530254  115078 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.530295  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:09.530357  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:09.574265  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:09.574301  115078 cri.go:89] found id: ""
	I1206 20:01:09.574313  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:09.574377  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.579240  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:09.579310  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:09.622512  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.622540  115078 cri.go:89] found id: ""
	I1206 20:01:09.622551  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:09.622619  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.627770  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:09.627847  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:09.675976  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:09.676007  115078 cri.go:89] found id: ""
	I1206 20:01:09.676018  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:09.676082  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.680750  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:09.680824  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:09.721081  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:09.721108  115078 cri.go:89] found id: ""
	I1206 20:01:09.721119  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:09.721181  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.725501  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:09.725568  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:09.777674  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:09.777700  115078 cri.go:89] found id: ""
	I1206 20:01:09.777709  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:09.777767  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.782475  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:09.782558  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:09.833889  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:09.833916  115078 cri.go:89] found id: ""
	I1206 20:01:09.833926  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:09.833985  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.838897  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:09.838977  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:09.880892  115078 cri.go:89] found id: ""
	I1206 20:01:09.880923  115078 logs.go:284] 0 containers: []
	W1206 20:01:09.880934  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:09.880943  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:09.881011  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:09.924025  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:09.924058  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:09.924065  115078 cri.go:89] found id: ""
	I1206 20:01:09.924075  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:09.924142  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.928667  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:09.933112  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:09.933134  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:09.949212  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:09.949254  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:09.996227  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:09.996261  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:10.046607  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:10.046645  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:10.102171  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:10.102214  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:10.160600  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:10.160641  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:10.203673  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:10.203709  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:10.681783  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:10.681824  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:10.813061  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:10.813102  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:10.857895  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:10.857930  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:10.904589  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:10.904625  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:10.957570  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:10.957608  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
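
The repeated "listing CRI containers" / "Gathering logs for ..." pairs above all follow one pattern: discover container IDs with `sudo crictl ps -a --quiet --name=<component>`, then dump each container's recent log with `crictl logs --tail 400 <id>`. A minimal sketch of that pattern; the package and function names are hypothetical:

    // Minimal sketch, assuming crictl is on the node and reachable via sudo.
    package logdump

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ContainerLogs returns the last `tail` log lines of every container whose
    // name matches `name`, using the same two-step crictl flow shown in the log.
    func ContainerLogs(name string, tail int) (map[string]string, error) {
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("list %s containers: %w", name, err)
        }
        logs := make(map[string]string)
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
            if err != nil {
                return nil, fmt.Errorf("logs for %s: %w", id, err)
            }
            logs[id] = string(out)
        }
        return logs, nil
    }
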
	I1206 20:01:09.624997  115591 pod_ready.go:92] pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.625025  115591 pod_ready.go:81] duration metric: took 5.029829059s waiting for pod "coredns-5dd5756b68-57z8q" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.625038  115591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632534  115591 pod_ready.go:92] pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.632561  115591 pod_ready.go:81] duration metric: took 7.514952ms waiting for pod "coredns-5dd5756b68-8lsns" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.632574  115591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642077  115591 pod_ready.go:92] pod "etcd-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.642107  115591 pod_ready.go:81] duration metric: took 9.52505ms waiting for pod "etcd-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.642121  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648636  115591 pod_ready.go:92] pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.648658  115591 pod_ready.go:81] duration metric: took 6.530394ms waiting for pod "kube-apiserver-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.648667  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656534  115591 pod_ready.go:92] pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.656561  115591 pod_ready.go:81] duration metric: took 7.887248ms waiting for pod "kube-controller-manager-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.656573  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019281  115591 pod_ready.go:92] pod "kube-proxy-nf2cw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.019310  115591 pod_ready.go:81] duration metric: took 362.727602ms waiting for pod "kube-proxy-nf2cw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.019323  115591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419938  115591 pod_ready.go:92] pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:10.419971  115591 pod_ready.go:81] duration metric: took 400.640145ms waiting for pod "kube-scheduler-embed-certs-209025" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:10.419982  115591 pod_ready.go:38] duration metric: took 5.834689614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
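
The pod_ready waits that just finished above poll each system-critical pod until its Ready condition turns True. A sketch of the same check using client-go (an assumption made for illustration; minikube's actual pod_ready.go is not reproduced here, and the package and function names are hypothetical):

    // Sketch only: assumes client-go is available as a dependency.
    package podready

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // WaitPodReady polls the pod until its PodReady condition is True or the
    // timeout expires, roughly what the "waiting ... to be \"Ready\"" lines do.
    func WaitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }
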
	I1206 20:01:10.420000  115591 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:10.420062  115591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:10.436691  115591 api_server.go:72] duration metric: took 5.973781556s to wait for apiserver process to appear ...
	I1206 20:01:10.436723  115591 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:10.436746  115591 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I1206 20:01:10.442876  115591 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I1206 20:01:10.444774  115591 api_server.go:141] control plane version: v1.28.4
	I1206 20:01:10.444798  115591 api_server.go:131] duration metric: took 8.067787ms to wait for apiserver health ...
	I1206 20:01:10.444808  115591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:10.624219  115591 system_pods.go:59] 9 kube-system pods found
	I1206 20:01:10.624251  115591 system_pods.go:61] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:10.624256  115591 system_pods.go:61] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:10.624260  115591 system_pods.go:61] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:10.624264  115591 system_pods.go:61] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:10.624268  115591 system_pods.go:61] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:10.624272  115591 system_pods.go:61] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:10.624275  115591 system_pods.go:61] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:10.624282  115591 system_pods.go:61] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.624286  115591 system_pods.go:61] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:10.624296  115591 system_pods.go:74] duration metric: took 179.481721ms to wait for pod list to return data ...
	I1206 20:01:10.624306  115591 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:10.818715  115591 default_sa.go:45] found service account: "default"
	I1206 20:01:10.818741  115591 default_sa.go:55] duration metric: took 194.428895ms for default service account to be created ...
	I1206 20:01:10.818750  115591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:11.022686  115591 system_pods.go:86] 9 kube-system pods found
	I1206 20:01:11.022713  115591 system_pods.go:89] "coredns-5dd5756b68-57z8q" [24c81a49-d80e-47df-86d2-0056ccc25858] Running
	I1206 20:01:11.022718  115591 system_pods.go:89] "coredns-5dd5756b68-8lsns" [14c5f16e-0c30-4602-b772-c6e0c8a577a8] Running
	I1206 20:01:11.022722  115591 system_pods.go:89] "etcd-embed-certs-209025" [e352dba2-c22b-4b21-9cb7-d641d29307a0] Running
	I1206 20:01:11.022726  115591 system_pods.go:89] "kube-apiserver-embed-certs-209025" [b4bfe0d1-0f1f-4e5e-96a4-94ec19cc1ab4] Running
	I1206 20:01:11.022730  115591 system_pods.go:89] "kube-controller-manager-embed-certs-209025" [1e9819fc-0187-4410-97f5-a517fb6b6595] Running
	I1206 20:01:11.022734  115591 system_pods.go:89] "kube-proxy-nf2cw" [5e49b3f8-7eee-4c04-ae22-75ccd216bb27] Running
	I1206 20:01:11.022738  115591 system_pods.go:89] "kube-scheduler-embed-certs-209025" [cc5d4d6f-515d-48b9-8d6f-83c33b0fa037] Running
	I1206 20:01:11.022744  115591 system_pods.go:89] "metrics-server-57f55c9bc5-5qxxj" [4eaddb4b-aec0-4cc7-b467-bb882bcba8a0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.022750  115591 system_pods.go:89] "storage-provisioner" [2417fc35-04fd-4dcf-9d16-2649a0d3bb3b] Running
	I1206 20:01:11.022762  115591 system_pods.go:126] duration metric: took 204.004835ms to wait for k8s-apps to be running ...
	I1206 20:01:11.022774  115591 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:11.022824  115591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:11.041212  115591 system_svc.go:56] duration metric: took 18.424469ms WaitForService to wait for kubelet.
	I1206 20:01:11.041256  115591 kubeadm.go:581] duration metric: took 6.578354937s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:11.041291  115591 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:11.219045  115591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:11.219079  115591 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:11.219094  115591 node_conditions.go:105] duration metric: took 177.793737ms to run NodePressure ...
	I1206 20:01:11.219107  115591 start.go:228] waiting for startup goroutines ...
	I1206 20:01:11.219113  115591 start.go:233] waiting for cluster config update ...
	I1206 20:01:11.219125  115591 start.go:242] writing updated cluster config ...
	I1206 20:01:11.219482  115591 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:11.275863  115591 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 20:01:11.278074  115591 out.go:177] * Done! kubectl is now configured to use "embed-certs-209025" cluster and "default" namespace by default
	I1206 20:01:09.099590  115217 pod_ready.go:92] pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.099616  115217 pod_ready.go:81] duration metric: took 8.363590309s waiting for pod "coredns-5644d7b6d9-2nncf" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.099626  115217 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.103452  115217 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103485  115217 pod_ready.go:81] duration metric: took 3.845902ms waiting for pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace to be "Ready" ...
	E1206 20:01:09.103499  115217 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-f627j" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-f627j" not found
	I1206 20:01:09.103507  115217 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110700  115217 pod_ready.go:92] pod "kube-proxy-wvqmw" in "kube-system" namespace has status "Ready":"True"
	I1206 20:01:09.110721  115217 pod_ready.go:81] duration metric: took 7.207091ms waiting for pod "kube-proxy-wvqmw" in "kube-system" namespace to be "Ready" ...
	I1206 20:01:09.110729  115217 pod_ready.go:38] duration metric: took 8.477100108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 20:01:09.110744  115217 api_server.go:52] waiting for apiserver process to appear ...
	I1206 20:01:09.110791  115217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 20:01:09.127244  115217 api_server.go:72] duration metric: took 8.855777965s to wait for apiserver process to appear ...
	I1206 20:01:09.127272  115217 api_server.go:88] waiting for apiserver healthz status ...
	I1206 20:01:09.127290  115217 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1206 20:01:09.134411  115217 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1206 20:01:09.135553  115217 api_server.go:141] control plane version: v1.16.0
	I1206 20:01:09.135578  115217 api_server.go:131] duration metric: took 8.298936ms to wait for apiserver health ...
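
The healthz waits above (both clusters answered 200 "ok") amount to polling the apiserver's /healthz endpoint until it responds. A minimal sketch, assuming the caller tolerates skipping TLS verification because the cluster CA is not wired into this example; the helper name is hypothetical:

    // Minimal sketch; InsecureSkipVerify is only because no CA is configured here.
    package health

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // WaitForHealthz polls url (e.g. https://192.168.61.33:8443/healthz) until it
    // returns HTTP 200 or the timeout expires.
    func WaitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reported healthy
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }
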
	I1206 20:01:09.135589  115217 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:09.140145  115217 system_pods.go:59] 4 kube-system pods found
	I1206 20:01:09.140167  115217 system_pods.go:61] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.140172  115217 system_pods.go:61] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.140178  115217 system_pods.go:61] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.140183  115217 system_pods.go:61] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.140191  115217 system_pods.go:74] duration metric: took 4.595695ms to wait for pod list to return data ...
	I1206 20:01:09.140198  115217 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:09.142852  115217 default_sa.go:45] found service account: "default"
	I1206 20:01:09.142877  115217 default_sa.go:55] duration metric: took 2.67139ms for default service account to be created ...
	I1206 20:01:09.142888  115217 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:09.145800  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.145822  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.145827  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.145833  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.145838  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.145856  115217 retry.go:31] will retry after 199.361191ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.351430  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.351475  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.351485  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.351497  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.351504  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.351529  115217 retry.go:31] will retry after 239.084983ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.595441  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.595479  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.595487  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.595498  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.595506  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.595528  115217 retry.go:31] will retry after 380.909676ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:09.982061  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:09.982088  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:09.982093  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:09.982101  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:09.982115  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:09.982133  115217 retry.go:31] will retry after 451.472574ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:10.439270  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:10.439303  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:10.439311  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:10.439321  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:10.439328  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:10.439350  115217 retry.go:31] will retry after 654.845182ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.101088  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.101129  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.101137  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.101147  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.101155  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.101178  115217 retry.go:31] will retry after 650.939663ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:11.757024  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:11.757053  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:11.757058  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:11.757065  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:11.757070  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:11.757088  115217 retry.go:31] will retry after 828.555469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:12.591156  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:12.591193  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:12.591209  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:12.591220  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:12.591227  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:12.591254  115217 retry.go:31] will retry after 1.26518336s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
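
The retry.go lines above re-run the kube-system pod check with growing delays until the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) appear. A simplified backoff loop in the same spirit; the real retry adds jitter, while this hypothetical sketch just doubles the delay:

    // Simplified sketch of a retry-with-backoff loop; names are hypothetical.
    package wait

    import (
        "fmt"
        "time"
    )

    // RetryWithBackoff calls check until it returns nil or attempts run out,
    // pausing between attempts and doubling the pause each time.
    func RetryWithBackoff(attempts int, initial time.Duration, check func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = check(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
    }
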
	I1206 20:01:11.000472  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:11.000505  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.545345  115078 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I1206 20:01:13.551262  115078 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I1206 20:01:13.553129  115078 api_server.go:141] control plane version: v1.29.0-rc.1
	I1206 20:01:13.553161  115078 api_server.go:131] duration metric: took 4.022898619s to wait for apiserver health ...
	I1206 20:01:13.553173  115078 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 20:01:13.553204  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 20:01:13.553287  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 20:01:13.619861  115078 cri.go:89] found id: "f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:13.619892  115078 cri.go:89] found id: ""
	I1206 20:01:13.619903  115078 logs.go:284] 1 containers: [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb]
	I1206 20:01:13.619994  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.625028  115078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 20:01:13.625099  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 20:01:13.667275  115078 cri.go:89] found id: "7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:13.667300  115078 cri.go:89] found id: ""
	I1206 20:01:13.667309  115078 logs.go:284] 1 containers: [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861]
	I1206 20:01:13.667378  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.671673  115078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 20:01:13.671740  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 20:01:13.713319  115078 cri.go:89] found id: "93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:13.713351  115078 cri.go:89] found id: ""
	I1206 20:01:13.713361  115078 logs.go:284] 1 containers: [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07]
	I1206 20:01:13.713428  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.718155  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 20:01:13.718219  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 20:01:13.758383  115078 cri.go:89] found id: "c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.758414  115078 cri.go:89] found id: ""
	I1206 20:01:13.758424  115078 logs.go:284] 1 containers: [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd]
	I1206 20:01:13.758488  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.762747  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 20:01:13.762826  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 20:01:13.803602  115078 cri.go:89] found id: "0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:13.803627  115078 cri.go:89] found id: ""
	I1206 20:01:13.803635  115078 logs.go:284] 1 containers: [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259]
	I1206 20:01:13.803685  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.808083  115078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 20:01:13.808160  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 20:01:13.852504  115078 cri.go:89] found id: "43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:13.852531  115078 cri.go:89] found id: ""
	I1206 20:01:13.852539  115078 logs.go:284] 1 containers: [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87]
	I1206 20:01:13.852598  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.857213  115078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 20:01:13.857322  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 20:01:13.896981  115078 cri.go:89] found id: ""
	I1206 20:01:13.897023  115078 logs.go:284] 0 containers: []
	W1206 20:01:13.897035  115078 logs.go:286] No container was found matching "kindnet"
	I1206 20:01:13.897044  115078 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 20:01:13.897110  115078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 20:01:13.940969  115078 cri.go:89] found id: "ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:13.940996  115078 cri.go:89] found id: "d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:13.941004  115078 cri.go:89] found id: ""
	I1206 20:01:13.941013  115078 logs.go:284] 2 containers: [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9]
	I1206 20:01:13.941075  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.945508  115078 ssh_runner.go:195] Run: which crictl
	I1206 20:01:13.949933  115078 logs.go:123] Gathering logs for kube-scheduler [c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd] ...
	I1206 20:01:13.949961  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00065611a1f7003ff8edd2ac629e3c6bbdfa4e1167b1dfd412ee16e5a9d3dcd"
	I1206 20:01:13.986034  115078 logs.go:123] Gathering logs for kube-controller-manager [43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87] ...
	I1206 20:01:13.986065  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43c8e91cea581258f9bf13a71487b9ffadc02ac982f7ff08fa649092e171fa87"
	I1206 20:01:14.045155  115078 logs.go:123] Gathering logs for storage-provisioner [ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617] ...
	I1206 20:01:14.045197  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec1601a49c79cb7e81473ecbb6e4506b82fc3b4942e91392653a6991579c5617"
	I1206 20:01:14.091205  115078 logs.go:123] Gathering logs for storage-provisioner [d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9] ...
	I1206 20:01:14.091240  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d07b3a050ef1964c57b85cc91a3d17646fef6d2298b91d72c007f432c51942c9"
	I1206 20:01:14.130184  115078 logs.go:123] Gathering logs for container status ...
	I1206 20:01:14.130221  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 20:01:14.176981  115078 logs.go:123] Gathering logs for dmesg ...
	I1206 20:01:14.177024  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 20:01:14.191755  115078 logs.go:123] Gathering logs for describe nodes ...
	I1206 20:01:14.191796  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 20:01:14.316375  115078 logs.go:123] Gathering logs for etcd [7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861] ...
	I1206 20:01:14.316413  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7633ca5afa8aeb44e4c450285edb3b4ca09bd881a87e959894d7740313a7d861"
	I1206 20:01:14.359700  115078 logs.go:123] Gathering logs for kubelet ...
	I1206 20:01:14.359746  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 20:01:14.415906  115078 logs.go:123] Gathering logs for kube-apiserver [f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb] ...
	I1206 20:01:14.415952  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b4ca951aec7e7912fa233772fa1e5a4cceffad95c297ea5b1b968ce835d2eb"
	I1206 20:01:14.471453  115078 logs.go:123] Gathering logs for kube-proxy [0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259] ...
	I1206 20:01:14.471496  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0da9ad5d9749cba59bd11e3aa5dd332ad76b5bdd0fcde476ce333d4069f0e259"
	I1206 20:01:14.520012  115078 logs.go:123] Gathering logs for coredns [93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07] ...
	I1206 20:01:14.520051  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aee471c37fc8008cd5a0ce741944f249d9cbf7a5cbb62828369850236b0f07"
	I1206 20:01:14.567445  115078 logs.go:123] Gathering logs for CRI-O ...
	I1206 20:01:14.567482  115078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 20:01:17.434636  115078 system_pods.go:59] 8 kube-system pods found
	I1206 20:01:17.434671  115078 system_pods.go:61] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.434676  115078 system_pods.go:61] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.434680  115078 system_pods.go:61] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.434685  115078 system_pods.go:61] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.434688  115078 system_pods.go:61] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.434692  115078 system_pods.go:61] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.434700  115078 system_pods.go:61] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.434706  115078 system_pods.go:61] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.434714  115078 system_pods.go:74] duration metric: took 3.881535405s to wait for pod list to return data ...
	I1206 20:01:17.434724  115078 default_sa.go:34] waiting for default service account to be created ...
	I1206 20:01:17.437744  115078 default_sa.go:45] found service account: "default"
	I1206 20:01:17.437770  115078 default_sa.go:55] duration metric: took 3.038532ms for default service account to be created ...
	I1206 20:01:17.437780  115078 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 20:01:17.444539  115078 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:17.444567  115078 system_pods.go:89] "coredns-76f75df574-h9pkz" [05501356-bf9b-4a99-a1b9-40d0caef38db] Running
	I1206 20:01:17.444572  115078 system_pods.go:89] "etcd-no-preload-989559" [6c1cb748-a6a8-4583-b8fd-adf37e05b771] Running
	I1206 20:01:17.444577  115078 system_pods.go:89] "kube-apiserver-no-preload-989559" [51d8b7c6-0cef-4832-96b2-5040c0725310] Running
	I1206 20:01:17.444583  115078 system_pods.go:89] "kube-controller-manager-no-preload-989559" [cc8dfb88-9990-488f-9150-5c643143dcf1] Running
	I1206 20:01:17.444587  115078 system_pods.go:89] "kube-proxy-zgqvt" [550b2491-c14f-47c4-82d5-1301fa351305] Running
	I1206 20:01:17.444592  115078 system_pods.go:89] "kube-scheduler-no-preload-989559" [53a5031e-51aa-4867-88ff-1c7972a0cfa7] Running
	I1206 20:01:17.444602  115078 system_pods.go:89] "metrics-server-57f55c9bc5-vz7qc" [97c1bcd2-eabc-4029-bb02-5bbfd4d96c0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.444608  115078 system_pods.go:89] "storage-provisioner" [c4d98de3-12ec-47f6-a6a6-f1dc61b479be] Running
	I1206 20:01:17.444619  115078 system_pods.go:126] duration metric: took 6.832576ms to wait for k8s-apps to be running ...
	I1206 20:01:17.444629  115078 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:17.444687  115078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:17.464821  115078 system_svc.go:56] duration metric: took 20.181153ms WaitForService to wait for kubelet.
	I1206 20:01:17.464866  115078 kubeadm.go:581] duration metric: took 4m24.398841426s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:17.464894  115078 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:17.467938  115078 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:17.467964  115078 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:17.467975  115078 node_conditions.go:105] duration metric: took 3.076458ms to run NodePressure ...
	I1206 20:01:17.467988  115078 start.go:228] waiting for startup goroutines ...
	I1206 20:01:17.467994  115078 start.go:233] waiting for cluster config update ...
	I1206 20:01:17.468004  115078 start.go:242] writing updated cluster config ...
	I1206 20:01:17.468290  115078 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:17.523451  115078 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1206 20:01:17.525609  115078 out.go:177] * Done! kubectl is now configured to use "no-preload-989559" cluster and "default" namespace by default
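At this point the no-preload-989559 profile has been made the active kubectl context with the "default" namespace. A quick, illustrative way to confirm that from the host (these commands are not part of the test run) would be:

    # the current context should be the profile name minikube just configured
    kubectl config current-context     # expected: no-preload-989559
    # the kube-system pods listed above should be visible through that context
    kubectl get pods -A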
	I1206 20:01:13.862479  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:13.862506  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:13.862512  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:13.862519  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:13.862523  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:13.862542  115217 retry.go:31] will retry after 1.299046526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:15.166601  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:15.166630  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:15.166635  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:15.166642  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:15.166647  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:15.166667  115217 retry.go:31] will retry after 1.832151574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:17.005707  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:17.005739  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:17.005746  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:17.005754  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:17.005774  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:17.005797  115217 retry.go:31] will retry after 1.796371959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:18.808729  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:18.808757  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:18.808763  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:18.808770  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:18.808775  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:18.808792  115217 retry.go:31] will retry after 2.814845209s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:21.630762  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:21.630791  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:21.630796  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:21.630811  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:21.630816  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:21.630834  115217 retry.go:31] will retry after 2.866148194s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:24.502168  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:24.502198  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:24.502203  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:24.502211  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:24.502215  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:24.502233  115217 retry.go:31] will retry after 3.777894628s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:28.284776  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:28.284812  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:28.284818  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:28.284825  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:28.284829  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:28.284847  115217 retry.go:31] will retry after 4.837538668s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:33.127301  115217 system_pods.go:86] 4 kube-system pods found
	I1206 20:01:33.127330  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:33.127336  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:33.127344  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:33.127349  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:33.127370  115217 retry.go:31] will retry after 6.833662344s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:39.966417  115217 system_pods.go:86] 5 kube-system pods found
	I1206 20:01:39.966450  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:39.966458  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Pending
	I1206 20:01:39.966465  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:39.966476  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:39.966483  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:39.966504  115217 retry.go:31] will retry after 9.204033337s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1206 20:01:49.176395  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:49.176434  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:49.176442  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Pending
	I1206 20:01:49.176450  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:49.176457  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:49.176462  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:49.176469  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Pending
	I1206 20:01:49.176479  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:49.176487  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:49.176511  115217 retry.go:31] will retry after 9.456016194s: missing components: etcd, kube-scheduler
	I1206 20:01:58.638807  115217 system_pods.go:86] 8 kube-system pods found
	I1206 20:01:58.638837  115217 system_pods.go:89] "coredns-5644d7b6d9-2nncf" [c6deb121-7406-4c9b-be7d-45b8b927c633] Running
	I1206 20:01:58.638842  115217 system_pods.go:89] "etcd-old-k8s-version-448851" [91d55b2e-4361-4615-a99c-d1338c427d81] Running
	I1206 20:01:58.638847  115217 system_pods.go:89] "kube-apiserver-old-k8s-version-448851" [ecace4aa-bc86-43ed-9067-365504abbf70] Running
	I1206 20:01:58.638851  115217 system_pods.go:89] "kube-controller-manager-old-k8s-version-448851" [cf55eb16-4a36-4d70-bb22-4cab5f9f7358] Running
	I1206 20:01:58.638855  115217 system_pods.go:89] "kube-proxy-wvqmw" [e8ae872e-3784-4fcc-a09c-82c56b3fcc05] Running
	I1206 20:01:58.638861  115217 system_pods.go:89] "kube-scheduler-old-k8s-version-448851" [373cb698-190a-480d-ac74-4ea990474ad1] Running
	I1206 20:01:58.638867  115217 system_pods.go:89] "metrics-server-74d5856cc6-tgtlm" [8a7743ff-40fa-4587-ae70-7517aae53c65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 20:01:58.638872  115217 system_pods.go:89] "storage-provisioner" [e6883ede-d439-42a2-93aa-a5fa9e2734c6] Running
	I1206 20:01:58.638879  115217 system_pods.go:126] duration metric: took 49.495986809s to wait for k8s-apps to be running ...
	I1206 20:01:58.638886  115217 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 20:01:58.638935  115217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 20:01:58.654683  115217 system_svc.go:56] duration metric: took 15.783018ms WaitForService to wait for kubelet.
	I1206 20:01:58.654715  115217 kubeadm.go:581] duration metric: took 58.383258338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 20:01:58.654738  115217 node_conditions.go:102] verifying NodePressure condition ...
	I1206 20:01:58.659189  115217 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1206 20:01:58.659215  115217 node_conditions.go:123] node cpu capacity is 2
	I1206 20:01:58.659226  115217 node_conditions.go:105] duration metric: took 4.482979ms to run NodePressure ...
	I1206 20:01:58.659239  115217 start.go:228] waiting for startup goroutines ...
	I1206 20:01:58.659245  115217 start.go:233] waiting for cluster config update ...
	I1206 20:01:58.659255  115217 start.go:242] writing updated cluster config ...
	I1206 20:01:58.659522  115217 ssh_runner.go:195] Run: rm -f paused
	I1206 20:01:58.710716  115217 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1206 20:01:58.713372  115217 out.go:177] 
	W1206 20:01:58.714711  115217 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1206 20:01:58.716208  115217 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1206 20:01:58.717734  115217 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-448851" cluster and "default" namespace by default
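The warning above flags a 12-minor-version skew between the host kubectl (1.28.4) and the 1.16.0 cluster, and the hint points at minikube's bundled, version-matched kubectl as the workaround. A minimal sketch of using it against this profile (illustrative only, not captured in this run):

    # run the kubectl matching the cluster version through the minikube wrapper
    minikube -p old-k8s-version-448851 kubectl -- get pods -A
    minikube -p old-k8s-version-448851 kubectl -- version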
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-06 19:54:55 UTC, ends at Wed 2023-12-06 20:15:07 UTC. --
	Dec 06 20:15:06 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:06.945961836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893706945946704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=061e68ef-fe50-4672-9168-773dccc3a080 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:06 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:06.952972004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3bf7e917-c004-423c-9a4d-bfab9199c116 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:06 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:06.953027987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3bf7e917-c004-423c-9a4d-bfab9199c116 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:06 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:06.953262280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3bf7e917-c004-423c-9a4d-bfab9199c116 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.002192418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=444d7061-74e2-477f-8cd5-d88434872d4a name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.002286374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=444d7061-74e2-477f-8cd5-d88434872d4a name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.003374201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7bb59514-e365-408f-a93a-42fa99b41925 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.003769538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893707003755685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7bb59514-e365-408f-a93a-42fa99b41925 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.004528248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8e1fc158-2aaa-4033-99d1-9baeaaab18ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.004573647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8e1fc158-2aaa-4033-99d1-9baeaaab18ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.005116909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8e1fc158-2aaa-4033-99d1-9baeaaab18ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.048751656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=973d831a-9f64-432a-a8fb-283ec0d87f00 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.048916224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=973d831a-9f64-432a-a8fb-283ec0d87f00 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.050452576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d3ced136-b9dc-483b-b428-3ec21bcaaa48 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.050936707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893707050914067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=d3ced136-b9dc-483b-b428-3ec21bcaaa48 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.051568036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fb6457b9-2919-45a4-8f13-1660d308ddc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.051622308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fb6457b9-2919-45a4-8f13-1660d308ddc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.051893659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fb6457b9-2919-45a4-8f13-1660d308ddc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.089920661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d116cbba-ff49-415f-b22a-f7d09e4c46e1 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.090002583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d116cbba-ff49-415f-b22a-f7d09e4c46e1 name=/runtime.v1.RuntimeService/Version
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.091379725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6ca1b988-a7d3-41f7-93ed-e8184c7d63f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.091789137Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701893707091773767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=6ca1b988-a7d3-41f7-93ed-e8184c7d63f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.096171797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=52034604-ee35-414a-a617-ef55f687ad73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.098082767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=52034604-ee35-414a-a617-ef55f687ad73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 20:15:07 old-k8s-version-448851 crio[712]: time="2023-12-06 20:15:07.098311670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e,PodSandboxId:30ccdc4107ffbdfae1ae76b136f0631fd2be267d12e6762906b0e182cce7016d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701892863377223159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6883ede-d439-42a2-93aa-a5fa9e2734c6,},Annotations:map[string]string{io.kubernetes.container.hash: 245502d1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242,PodSandboxId:90bef1ca16b739842aa13359c92662832704dc8e2f0b166127372ed39b72cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701892862594429591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wvqmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8ae872e-3784-4fcc-a09c-82c56b3fcc05,},Annotations:map[string]string{io.kubernetes.container.hash: f328273f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f,PodSandboxId:4ecead5f9543561f96015c444968c59eac4cb0b0fadbc1785686392f9aa7f6a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701892860700018447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-2nncf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6deb121-7406-4c9b-be7d-45b8b927c633,},Annotations:map[string]string{io.kubernetes.container.hash: e8af29be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2,PodSandboxId:8ef2aca28417874f8b1d6f5e7846c09e7d09bdbfca9bcc1dd4d7a81ca52d8c7e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701892836051407080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a73d7c6d9532e36b67d907cf5d7d0492,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f72f8c18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd,PodSandboxId:2cc7c0d14124e247d1439e8b1dfd26e9d280ad73e50f1a577085e6157254500c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701892834836147321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53,PodSandboxId:a88c3a3d24e686bd69ba1ad4b03a49872a0dd7c4453d3ba719f36db9d66883d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701892834512278739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701892833805241536,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69,PodSandboxId:070958b68242361d0e12fc2f0ba283bde3e8d48cc14fe02e1b8393153e05b8d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701892527069646128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-448851,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198a53bc90d7f2fd0cd5ce4edbeef394,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e7fa7d16,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=52034604-ee35-414a-a617-ef55f687ad73 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0268a45cb6867       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   30ccdc4107ffb       storage-provisioner
	0de730d3d80f9       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   14 minutes ago      Running             kube-proxy                0                   90bef1ca16b73       kube-proxy-wvqmw
	ff3e0be26327f       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   14 minutes ago      Running             coredns                   0                   4ecead5f95435       coredns-5644d7b6d9-2nncf
	0c383b9ccb2a1       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   8ef2aca284178       etcd-old-k8s-version-448851
	19e2a17fb2cb9       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   2cc7c0d14124e       kube-scheduler-old-k8s-version-448851
	06212ee2a32f7       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   a88c3a3d24e68       kube-controller-manager-old-k8s-version-448851
	4a03d08bf855b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            1                   070958b682423       kube-apiserver-old-k8s-version-448851
	46fe8c39d7ac6       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   19 minutes ago      Exited              kube-apiserver            0                   070958b682423       kube-apiserver-old-k8s-version-448851
	
	* 
	* ==> coredns [ff3e0be26327f950f977a92038b6268aa4d4d147690d95151432e4212fdef94f] <==
	* .:53
	2023-12-06T20:01:01.619Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-12-06T20:01:01.648Z [INFO] CoreDNS-1.6.2
	2023-12-06T20:01:01.648Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-12-06T20:01:39.076Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	2023-12-06T20:01:39.105Z [INFO] 127.0.0.1:50903 - 46455 "HINFO IN 7909166905492929656.2414890882460254701. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029157112s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-448851
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-448851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=old-k8s-version-448851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T20_00_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 20:00:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 20:14:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 20:14:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 20:14:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 20:14:40 +0000   Wed, 06 Dec 2023 20:00:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.33
	  Hostname:    old-k8s-version-448851
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 aa71c7e30b1142b693698088426cb1d6
	 System UUID:                aa71c7e3-0b11-42b6-9369-8088426cb1d6
	 Boot ID:                    329ce5de-4216-4673-8fb1-de5942212a26
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-2nncf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                etcd-old-k8s-version-448851                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-apiserver-old-k8s-version-448851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-controller-manager-old-k8s-version-448851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-proxy-wvqmw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                kube-scheduler-old-k8s-version-448851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                metrics-server-74d5856cc6-tgtlm                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-448851     Node old-k8s-version-448851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x7 over 14m)  kubelet, old-k8s-version-448851     Node old-k8s-version-448851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x8 over 14m)  kubelet, old-k8s-version-448851     Node old-k8s-version-448851 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kube-proxy, old-k8s-version-448851  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 6 19:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067492] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.360665] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.465418] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149930] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.509408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 6 19:55] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.103328] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.142685] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.114670] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.230580] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +19.942163] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +0.598795] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.635722] kauditd_printk_skb: 13 callbacks suppressed
	[Dec 6 19:56] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 6 20:00] systemd-fstab-generator[3089]: Ignoring "noauto" for root device
	[  +1.460595] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 20:01] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [0c383b9ccb2a1831725c86c7081f7006f905d6a2056c6479970649187f93acf2] <==
	* 2023-12-06 20:00:36.185641 I | raft: newRaft 8213be6a1edaaef2 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-06 20:00:36.185665 I | raft: 8213be6a1edaaef2 became follower at term 1
	2023-12-06 20:00:36.195363 W | auth: simple token is not cryptographically signed
	2023-12-06 20:00:36.201092 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-06 20:00:36.202358 I | etcdserver: 8213be6a1edaaef2 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-06 20:00:36.202918 I | etcdserver/membership: added member 8213be6a1edaaef2 [https://192.168.61.33:2380] to cluster 57e911bf31e05932
	2023-12-06 20:00:36.204434 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-06 20:00:36.204700 I | embed: listening for metrics on http://192.168.61.33:2381
	2023-12-06 20:00:36.204788 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-06 20:00:36.286287 I | raft: 8213be6a1edaaef2 is starting a new election at term 1
	2023-12-06 20:00:36.286375 I | raft: 8213be6a1edaaef2 became candidate at term 2
	2023-12-06 20:00:36.286400 I | raft: 8213be6a1edaaef2 received MsgVoteResp from 8213be6a1edaaef2 at term 2
	2023-12-06 20:00:36.286454 I | raft: 8213be6a1edaaef2 became leader at term 2
	2023-12-06 20:00:36.286486 I | raft: raft.node: 8213be6a1edaaef2 elected leader 8213be6a1edaaef2 at term 2
	2023-12-06 20:00:36.286787 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-06 20:00:36.288305 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-06 20:00:36.288381 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-06 20:00:36.288407 I | etcdserver: published {Name:old-k8s-version-448851 ClientURLs:[https://192.168.61.33:2379]} to cluster 57e911bf31e05932
	2023-12-06 20:00:36.288423 I | embed: ready to serve client requests
	2023-12-06 20:00:36.288968 I | embed: ready to serve client requests
	2023-12-06 20:00:36.289770 I | embed: serving client requests on 192.168.61.33:2379
	2023-12-06 20:00:36.292056 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-06 20:01:01.307261 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-2nncf\" " with result "range_response_count:1 size:1694" took too long (457.280095ms) to execute
	2023-12-06 20:10:36.905649 I | mvcc: store.index: compact 669
	2023-12-06 20:10:36.907736 I | mvcc: finished scheduled compaction at 669 (took 1.554577ms)
	
	* 
	* ==> kernel <==
	*  20:15:07 up 20 min,  0 users,  load average: 0.14, 0.14, 0.22
	Linux old-k8s-version-448851 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [46fe8c39d7ac63062240fa759515c05e9906abeb3581d184b7701d2441104a69] <==
	* W1206 20:00:31.063987       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.065884       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.075503       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.091137       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.119253       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.119991       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.137497       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.139043       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.149457       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.151409       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.178915       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.191732       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.203659       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.220000       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.229563       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.233321       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.236734       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.238091       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.261303       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.268956       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.272926       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.285213       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.301365       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.302007       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1206 20:00:31.306264       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [4a03d08bf855b9a5126a45dae7bafe41cf417a67c53c8573269c73979be322e4] <==
	* I1206 20:06:41.222238       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:06:41.222531       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:06:41.222676       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:06:41.222689       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:08:41.223273       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:08:41.223423       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:08:41.223478       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:08:41.223495       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:10:41.226268       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:10:41.226714       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:10:41.226939       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:10:41.226990       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:11:41.227289       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:11:41.227745       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:11:41.227896       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:11:41.227929       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 20:13:41.228387       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1206 20:13:41.228696       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 20:13:41.228798       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1206 20:13:41.228965       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [06212ee2a32f77acb29faf0fc6feca3a8a3c3d0820299d33947df28671af3a53] <==
	* W1206 20:09:00.912664       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:09:04.169028       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:09:32.914893       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:09:34.421787       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1206 20:10:04.674156       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:10:04.917979       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:10:34.925949       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:10:36.920247       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:11:05.178443       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:11:08.922325       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:11:35.430862       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:11:40.924683       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:12:05.683242       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:12:12.926591       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:12:35.935452       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:12:44.928709       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:13:06.187662       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:13:16.931098       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:13:36.440271       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:13:48.933654       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:14:06.692158       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:14:20.935615       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:14:36.944745       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1206 20:14:52.938132       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1206 20:15:07.198547       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [0de730d3d80f9b4ccda3a4a263a0af4eec2fc190737aea02ccac69353cf5d242] <==
	* W1206 20:01:03.033490       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1206 20:01:03.047080       1 node.go:135] Successfully retrieved node IP: 192.168.61.33
	I1206 20:01:03.047237       1 server_others.go:149] Using iptables Proxier.
	I1206 20:01:03.048296       1 server.go:529] Version: v1.16.0
	I1206 20:01:03.051280       1 config.go:131] Starting endpoints config controller
	I1206 20:01:03.052588       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1206 20:01:03.058026       1 config.go:313] Starting service config controller
	I1206 20:01:03.058163       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1206 20:01:03.156319       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1206 20:01:03.159164       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [19e2a17fb2cb9ca9163abd44515140e0be53b2f15eef72c2e2e872a93d767ddd] <==
	* W1206 20:00:40.275718       1 authentication.go:79] Authentication is disabled
	I1206 20:00:40.275737       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1206 20:00:40.276338       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1206 20:00:40.332312       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:40.332531       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:40.345371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 20:00:40.345488       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:40.345601       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:40.345950       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 20:00:40.346232       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:40.346363       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 20:00:40.346370       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:40.347131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 20:00:40.348993       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 20:00:41.336311       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 20:00:41.357099       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 20:00:41.357705       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 20:00:41.359217       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 20:00:41.360931       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 20:00:41.361007       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:41.361051       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 20:00:41.361116       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 20:00:41.361146       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 20:00:41.361390       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 20:00:41.361924       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-06 19:54:55 UTC, ends at Wed 2023-12-06 20:15:07 UTC. --
	Dec 06 20:10:33 old-k8s-version-448851 kubelet[3106]: E1206 20:10:33.395087    3106 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 06 20:10:41 old-k8s-version-448851 kubelet[3106]: E1206 20:10:41.308242    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:10:52 old-k8s-version-448851 kubelet[3106]: E1206 20:10:52.307641    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:11:07 old-k8s-version-448851 kubelet[3106]: E1206 20:11:07.308340    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:11:18 old-k8s-version-448851 kubelet[3106]: E1206 20:11:18.307766    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:11:33 old-k8s-version-448851 kubelet[3106]: E1206 20:11:33.307483    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:11:44 old-k8s-version-448851 kubelet[3106]: E1206 20:11:44.307594    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:11:55 old-k8s-version-448851 kubelet[3106]: E1206 20:11:55.308630    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:12:09 old-k8s-version-448851 kubelet[3106]: E1206 20:12:09.319269    3106 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 06 20:12:09 old-k8s-version-448851 kubelet[3106]: E1206 20:12:09.319343    3106 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 06 20:12:09 old-k8s-version-448851 kubelet[3106]: E1206 20:12:09.319387    3106 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 06 20:12:09 old-k8s-version-448851 kubelet[3106]: E1206 20:12:09.319415    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 06 20:12:20 old-k8s-version-448851 kubelet[3106]: E1206 20:12:20.307757    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:12:31 old-k8s-version-448851 kubelet[3106]: E1206 20:12:31.307797    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:12:43 old-k8s-version-448851 kubelet[3106]: E1206 20:12:43.308178    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:12:55 old-k8s-version-448851 kubelet[3106]: E1206 20:12:55.307550    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:13:09 old-k8s-version-448851 kubelet[3106]: E1206 20:13:09.307333    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:13:23 old-k8s-version-448851 kubelet[3106]: E1206 20:13:23.307874    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:13:37 old-k8s-version-448851 kubelet[3106]: E1206 20:13:37.307334    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:13:48 old-k8s-version-448851 kubelet[3106]: E1206 20:13:48.307507    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:14:03 old-k8s-version-448851 kubelet[3106]: E1206 20:14:03.307681    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:14:15 old-k8s-version-448851 kubelet[3106]: E1206 20:14:15.308398    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:14:26 old-k8s-version-448851 kubelet[3106]: E1206 20:14:26.307663    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:14:41 old-k8s-version-448851 kubelet[3106]: E1206 20:14:41.307649    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 06 20:14:56 old-k8s-version-448851 kubelet[3106]: E1206 20:14:56.307704    3106 pod_workers.go:191] Error syncing pod 8a7743ff-40fa-4587-ae70-7517aae53c65 ("metrics-server-74d5856cc6-tgtlm_kube-system(8a7743ff-40fa-4587-ae70-7517aae53c65)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [0268a45cb6867f331dc457f17d2b30a94d3ed6e0096e2b4f24e3cf7bcab18d7e] <==
	* I1206 20:01:03.529298       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 20:01:03.546125       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 20:01:03.546257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 20:01:03.565940       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 20:01:03.566286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448851_54eb4b2d-3290-45ce-b3f4-ff1907c8baa1!
	I1206 20:01:03.572036       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ef07ee0-ed24-473c-aea3-e7b6e1797ad9", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-448851_54eb4b2d-3290-45ce-b3f4-ff1907c8baa1 became leader
	I1206 20:01:03.667442       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448851_54eb4b2d-3290-45ce-b3f4-ff1907c8baa1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-448851 -n old-k8s-version-448851
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-448851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-tgtlm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-448851 describe pod metrics-server-74d5856cc6-tgtlm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-448851 describe pod metrics-server-74d5856cc6-tgtlm: exit status 1 (74.185451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-tgtlm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-448851 describe pod metrics-server-74d5856cc6-tgtlm: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (246.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (140.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-347168 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-347168 --alsologtostderr -v=3: exit status 82 (2m1.908762414s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-347168"  ...
	* Stopping node "newest-cni-347168"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 20:16:10.799083  121594 out.go:296] Setting OutFile to fd 1 ...
	I1206 20:16:10.799293  121594 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:16:10.799305  121594 out.go:309] Setting ErrFile to fd 2...
	I1206 20:16:10.799310  121594 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 20:16:10.799514  121594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 20:16:10.799808  121594 out.go:303] Setting JSON to false
	I1206 20:16:10.799919  121594 mustload.go:65] Loading cluster: newest-cni-347168
	I1206 20:16:10.800355  121594 config.go:182] Loaded profile config "newest-cni-347168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:16:10.800472  121594 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/newest-cni-347168/config.json ...
	I1206 20:16:10.800678  121594 mustload.go:65] Loading cluster: newest-cni-347168
	I1206 20:16:10.800820  121594 config.go:182] Loaded profile config "newest-cni-347168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1206 20:16:10.800874  121594 stop.go:39] StopHost: newest-cni-347168
	I1206 20:16:10.801533  121594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:16:10.801590  121594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:16:10.817634  121594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I1206 20:16:10.818149  121594 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:16:10.818968  121594 main.go:141] libmachine: Using API Version  1
	I1206 20:16:10.819008  121594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:16:10.819423  121594 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:16:10.821887  121594 out.go:177] * Stopping node "newest-cni-347168"  ...
	I1206 20:16:10.823278  121594 main.go:141] libmachine: Stopping "newest-cni-347168"...
	I1206 20:16:10.823307  121594 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:16:10.825491  121594 main.go:141] libmachine: (newest-cni-347168) Calling .Stop
	I1206 20:16:10.829350  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 0/60
	I1206 20:16:11.831733  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 1/60
	I1206 20:16:12.833011  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 2/60
	I1206 20:16:13.835079  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 3/60
	I1206 20:16:14.836622  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 4/60
	I1206 20:16:15.838866  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 5/60
	I1206 20:16:16.840743  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 6/60
	I1206 20:16:17.842143  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 7/60
	I1206 20:16:18.844422  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 8/60
	I1206 20:16:19.846167  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 9/60
	I1206 20:16:21.181367  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 10/60
	I1206 20:16:22.183055  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 11/60
	I1206 20:16:23.184536  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 12/60
	I1206 20:16:24.186219  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 13/60
	I1206 20:16:25.188393  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 14/60
	I1206 20:16:26.190617  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 15/60
	I1206 20:16:27.192138  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 16/60
	I1206 20:16:28.193534  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 17/60
	I1206 20:16:29.195206  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 18/60
	I1206 20:16:30.196550  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 19/60
	I1206 20:16:31.198650  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 20/60
	I1206 20:16:32.200243  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 21/60
	I1206 20:16:33.201595  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 22/60
	I1206 20:16:34.203066  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 23/60
	I1206 20:16:35.204603  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 24/60
	I1206 20:16:36.206558  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 25/60
	I1206 20:16:37.208111  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 26/60
	I1206 20:16:38.209691  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 27/60
	I1206 20:16:39.211221  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 28/60
	I1206 20:16:40.212810  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 29/60
	I1206 20:16:41.214122  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 30/60
	I1206 20:16:42.215766  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 31/60
	I1206 20:16:43.217070  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 32/60
	I1206 20:16:44.218421  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 33/60
	I1206 20:16:45.219689  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 34/60
	I1206 20:16:46.222124  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 35/60
	I1206 20:16:47.223519  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 36/60
	I1206 20:16:48.224875  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 37/60
	I1206 20:16:49.226259  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 38/60
	I1206 20:16:50.227682  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 39/60
	I1206 20:16:51.228996  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 40/60
	I1206 20:16:52.230625  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 41/60
	I1206 20:16:53.232194  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 42/60
	I1206 20:16:54.233517  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 43/60
	I1206 20:16:55.235137  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 44/60
	I1206 20:16:56.237182  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 45/60
	I1206 20:16:57.238646  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 46/60
	I1206 20:16:58.240355  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 47/60
	I1206 20:16:59.241654  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 48/60
	I1206 20:17:00.243703  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 49/60
	I1206 20:17:01.245689  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 50/60
	I1206 20:17:02.773919  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 51/60
	I1206 20:17:03.775466  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 52/60
	I1206 20:17:04.776730  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 53/60
	I1206 20:17:05.778147  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 54/60
	I1206 20:17:06.780162  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 55/60
	I1206 20:17:07.781554  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 56/60
	I1206 20:17:08.783734  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 57/60
	I1206 20:17:09.784982  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 58/60
	I1206 20:17:10.786388  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 59/60
	I1206 20:17:11.787484  121594 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 20:17:11.787546  121594 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 20:17:11.787566  121594 retry.go:31] will retry after 726.73824ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 20:17:12.514482  121594 stop.go:39] StopHost: newest-cni-347168
	I1206 20:17:12.514940  121594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 20:17:12.515077  121594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 20:17:12.529276  121594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I1206 20:17:12.529692  121594 main.go:141] libmachine: () Calling .GetVersion
	I1206 20:17:12.530205  121594 main.go:141] libmachine: Using API Version  1
	I1206 20:17:12.530236  121594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 20:17:12.530575  121594 main.go:141] libmachine: () Calling .GetMachineName
	I1206 20:17:12.532975  121594 out.go:177] * Stopping node "newest-cni-347168"  ...
	I1206 20:17:12.534572  121594 main.go:141] libmachine: Stopping "newest-cni-347168"...
	I1206 20:17:12.534589  121594 main.go:141] libmachine: (newest-cni-347168) Calling .GetState
	I1206 20:17:12.536034  121594 main.go:141] libmachine: (newest-cni-347168) Calling .Stop
	I1206 20:17:12.539452  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 0/60
	I1206 20:17:13.540820  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 1/60
	I1206 20:17:14.542498  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 2/60
	I1206 20:17:15.543752  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 3/60
	I1206 20:17:16.545317  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 4/60
	I1206 20:17:17.547313  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 5/60
	I1206 20:17:18.548686  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 6/60
	I1206 20:17:19.550081  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 7/60
	I1206 20:17:20.551360  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 8/60
	I1206 20:17:21.552825  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 9/60
	I1206 20:17:22.554693  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 10/60
	I1206 20:17:23.556164  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 11/60
	I1206 20:17:24.557602  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 12/60
	I1206 20:17:25.559109  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 13/60
	I1206 20:17:26.560437  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 14/60
	I1206 20:17:27.562412  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 15/60
	I1206 20:17:28.563805  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 16/60
	I1206 20:17:29.565125  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 17/60
	I1206 20:17:30.566504  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 18/60
	I1206 20:17:31.567732  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 19/60
	I1206 20:17:32.569592  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 20/60
	I1206 20:17:33.571178  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 21/60
	I1206 20:17:34.572437  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 22/60
	I1206 20:17:35.573933  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 23/60
	I1206 20:17:36.575130  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 24/60
	I1206 20:17:37.576973  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 25/60
	I1206 20:17:38.578371  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 26/60
	I1206 20:17:39.579885  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 27/60
	I1206 20:17:40.581418  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 28/60
	I1206 20:17:41.583860  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 29/60
	I1206 20:17:42.585446  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 30/60
	I1206 20:17:43.586990  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 31/60
	I1206 20:17:44.588285  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 32/60
	I1206 20:17:45.589841  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 33/60
	I1206 20:17:46.591155  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 34/60
	I1206 20:17:47.592919  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 35/60
	I1206 20:17:48.594383  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 36/60
	I1206 20:17:49.595734  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 37/60
	I1206 20:17:50.597283  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 38/60
	I1206 20:17:51.598624  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 39/60
	I1206 20:17:52.600426  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 40/60
	I1206 20:17:53.601922  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 41/60
	I1206 20:17:54.603630  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 42/60
	I1206 20:17:55.605160  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 43/60
	I1206 20:17:56.606661  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 44/60
	I1206 20:17:57.608536  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 45/60
	I1206 20:17:58.609855  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 46/60
	I1206 20:17:59.611601  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 47/60
	I1206 20:18:00.613010  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 48/60
	I1206 20:18:01.614505  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 49/60
	I1206 20:18:02.616470  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 50/60
	I1206 20:18:03.617838  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 51/60
	I1206 20:18:04.619303  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 52/60
	I1206 20:18:05.620609  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 53/60
	I1206 20:18:06.622030  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 54/60
	I1206 20:18:07.623960  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 55/60
	I1206 20:18:08.625448  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 56/60
	I1206 20:18:09.626893  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 57/60
	I1206 20:18:10.628497  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 58/60
	I1206 20:18:11.630126  121594 main.go:141] libmachine: (newest-cni-347168) Waiting for machine to stop 59/60
	I1206 20:18:12.631247  121594 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1206 20:18:12.631295  121594 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1206 20:18:12.633502  121594 out.go:177] 
	W1206 20:18:12.635123  121594 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1206 20:18:12.635137  121594 out.go:239] * 
	* 
	W1206 20:18:12.638567  121594 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 20:18:12.640198  121594 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p newest-cni-347168 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347168 -n newest-cni-347168
E1206 20:18:14.591060   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.crt: no such file or directory
E1206 20:18:15.223174   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.crt: no such file or directory
E1206 20:18:22.657517   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 20:18:24.831785   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347168 -n newest-cni-347168: exit status 3 (18.63592694s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 20:18:31.277626  122775 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.192:22: connect: no route to host
	E1206 20:18:31.277658  122775 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.192:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-347168" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (140.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347168 -n newest-cni-347168
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347168 -n newest-cni-347168: exit status 3 (3.199570087s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 20:18:34.477613  122851 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.192:22: connect: no route to host
	E1206 20:18:34.477638  122851 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.192:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-347168 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-347168 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153557015s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.192:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-347168 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347168 -n newest-cni-347168
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347168 -n newest-cni-347168: exit status 3 (3.062645397s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 20:18:43.693665  122911 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.192:22: connect: no route to host
	E1206 20:18:43.693693  122911 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.192:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-347168" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)

                                                
                                    

Test pass (236/305)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.43
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 5.85
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.1/json-events 5.51
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.58
27 TestOffline 110.6
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 153.59
34 TestAddons/parallel/Registry 15.07
36 TestAddons/parallel/InspektorGadget 11.19
37 TestAddons/parallel/MetricsServer 6.12
38 TestAddons/parallel/HelmTiller 11.15
40 TestAddons/parallel/CSI 76.4
41 TestAddons/parallel/Headlamp 17.53
42 TestAddons/parallel/CloudSpanner 5.77
43 TestAddons/parallel/LocalPath 57.6
44 TestAddons/parallel/NvidiaDevicePlugin 5.7
47 TestAddons/serial/GCPAuth/Namespaces 0.12
49 TestCertOptions 68.33
50 TestCertExpiration 330.43
52 TestForceSystemdFlag 103.14
53 TestForceSystemdEnv 85.93
55 TestKVMDriverInstallOrUpdate 2.98
59 TestErrorSpam/setup 47.75
60 TestErrorSpam/start 0.39
61 TestErrorSpam/status 0.82
62 TestErrorSpam/pause 1.61
63 TestErrorSpam/unpause 1.81
64 TestErrorSpam/stop 2.27
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 76.94
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 35.64
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.29
76 TestFunctional/serial/CacheCmd/cache/add_local 1.45
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 35.63
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.65
87 TestFunctional/serial/LogsFileCmd 1.63
88 TestFunctional/serial/InvalidService 4.13
90 TestFunctional/parallel/ConfigCmd 0.46
91 TestFunctional/parallel/DashboardCmd 26.51
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.29
94 TestFunctional/parallel/StatusCmd 1.29
98 TestFunctional/parallel/ServiceCmdConnect 10.65
99 TestFunctional/parallel/AddonsCmd 0.2
100 TestFunctional/parallel/PersistentVolumeClaim 49.04
102 TestFunctional/parallel/SSHCmd 0.49
103 TestFunctional/parallel/CpCmd 1.06
104 TestFunctional/parallel/MySQL 27.79
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.74
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
114 TestFunctional/parallel/License 0.23
115 TestFunctional/parallel/ServiceCmd/DeployApp 13.24
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 1.27
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.68
123 TestFunctional/parallel/ImageCommands/Setup 0.91
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.59
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.66
130 TestFunctional/parallel/ServiceCmd/List 0.39
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
133 TestFunctional/parallel/ServiceCmd/Format 0.45
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
135 TestFunctional/parallel/ServiceCmd/URL 0.36
136 TestFunctional/parallel/ProfileCmd/profile_list 0.43
137 TestFunctional/parallel/MountCmd/any-port 23.23
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
143 TestFunctional/parallel/MountCmd/specific-port 1.75
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestIngressAddonLegacy/StartLegacyK8sCluster 110.79
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.08
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.65
167 TestJSONOutput/start/Command 60.74
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.73
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.67
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 9.11
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.23
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 96.13
199 TestMountStart/serial/StartWithMountFirst 27.07
200 TestMountStart/serial/VerifyMountFirst 0.4
201 TestMountStart/serial/StartWithMountSecond 27.29
202 TestMountStart/serial/VerifyMountSecond 0.41
203 TestMountStart/serial/DeleteFirst 0.7
204 TestMountStart/serial/VerifyMountPostDelete 0.41
205 TestMountStart/serial/Stop 1.22
206 TestMountStart/serial/RestartStopped 21.65
207 TestMountStart/serial/VerifyMountPostStop 0.42
210 TestMultiNode/serial/FreshStart2Nodes 112.59
211 TestMultiNode/serial/DeployApp2Nodes 4.6
213 TestMultiNode/serial/AddNode 42.14
214 TestMultiNode/serial/MultiNodeLabels 0.06
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.84
217 TestMultiNode/serial/StopNode 3.01
218 TestMultiNode/serial/StartAfterStop 29.9
220 TestMultiNode/serial/DeleteNode 1.78
222 TestMultiNode/serial/RestartMultiNode 445.36
223 TestMultiNode/serial/ValidateNameConflict 48.31
230 TestScheduledStopUnix 118.49
236 TestKubernetesUpgrade 203.16
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
243 TestNoKubernetes/serial/StartWithK8s 111.32
248 TestNetworkPlugins/group/false 3.41
252 TestStoppedBinaryUpgrade/Setup 0.29
254 TestNoKubernetes/serial/StartWithStopK8s 7.56
255 TestNoKubernetes/serial/Start 28.84
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
257 TestNoKubernetes/serial/ProfileList 0.86
258 TestNoKubernetes/serial/Stop 1.49
259 TestNoKubernetes/serial/StartNoArgs 26.18
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
269 TestPause/serial/Start 65.81
270 TestPause/serial/SecondStartNoReconfiguration 63.99
271 TestNetworkPlugins/group/auto/Start 131.31
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.42
273 TestNetworkPlugins/group/kindnet/Start 101.27
274 TestPause/serial/Pause 0.83
275 TestPause/serial/VerifyStatus 0.3
276 TestPause/serial/Unpause 0.88
277 TestPause/serial/PauseAgain 1.41
278 TestPause/serial/DeletePaused 1.09
279 TestPause/serial/VerifyDeletedResources 13.8
280 TestNetworkPlugins/group/calico/Start 110.87
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
282 TestNetworkPlugins/group/custom-flannel/Start 90.95
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
284 TestNetworkPlugins/group/auto/KubeletFlags 0.28
285 TestNetworkPlugins/group/kindnet/NetCatPod 13.47
286 TestNetworkPlugins/group/auto/NetCatPod 13.56
287 TestNetworkPlugins/group/kindnet/DNS 0.19
288 TestNetworkPlugins/group/kindnet/Localhost 0.16
289 TestNetworkPlugins/group/auto/DNS 0.22
290 TestNetworkPlugins/group/kindnet/HairPin 0.18
291 TestNetworkPlugins/group/auto/Localhost 0.17
292 TestNetworkPlugins/group/auto/HairPin 0.18
293 TestNetworkPlugins/group/enable-default-cni/Start 68.07
294 TestNetworkPlugins/group/flannel/Start 120.14
295 TestNetworkPlugins/group/calico/ControllerPod 5.04
296 TestNetworkPlugins/group/calico/KubeletFlags 0.23
297 TestNetworkPlugins/group/calico/NetCatPod 12.4
298 TestNetworkPlugins/group/calico/DNS 0.3
299 TestNetworkPlugins/group/calico/Localhost 0.22
300 TestNetworkPlugins/group/calico/HairPin 0.19
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.33
303 TestNetworkPlugins/group/bridge/Start 112.53
304 TestNetworkPlugins/group/custom-flannel/DNS 0.25
305 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
306 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
307 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
308 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.38
309 TestNetworkPlugins/group/enable-default-cni/DNS 26.98
311 TestStartStop/group/old-k8s-version/serial/FirstStart 148.38
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
314 TestNetworkPlugins/group/flannel/ControllerPod 5.19
316 TestStartStop/group/no-preload/serial/FirstStart 96.57
317 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
318 TestNetworkPlugins/group/flannel/NetCatPod 13.02
319 TestNetworkPlugins/group/flannel/DNS 0.22
320 TestNetworkPlugins/group/flannel/Localhost 0.21
321 TestNetworkPlugins/group/flannel/HairPin 0.2
323 TestStartStop/group/embed-certs/serial/FirstStart 108.46
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
325 TestNetworkPlugins/group/bridge/NetCatPod 13.35
326 TestNetworkPlugins/group/bridge/DNS 0.19
327 TestNetworkPlugins/group/bridge/Localhost 0.15
328 TestNetworkPlugins/group/bridge/HairPin 0.15
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.39
331 TestStartStop/group/no-preload/serial/DeployApp 9.97
332 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
334 TestStartStop/group/old-k8s-version/serial/DeployApp 10.5
335 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.45
338 TestStartStop/group/embed-certs/serial/DeployApp 9.46
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
341 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
345 TestStartStop/group/no-preload/serial/SecondStart 671.9
346 TestStartStop/group/old-k8s-version/serial/SecondStart 701.05
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 604.69
350 TestStartStop/group/embed-certs/serial/SecondStart 622.15
360 TestStartStop/group/newest-cni/serial/FirstStart 59.36
361 TestStartStop/group/newest-cni/serial/DeployApp 0
362 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.63
365 TestStartStop/group/newest-cni/serial/SecondStart 333.23
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
369 TestStartStop/group/newest-cni/serial/Pause 2.5
x
+
TestDownloadOnly/v1.16.0/json-events (10.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-324691 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-324691 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.433467771s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-324691
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-324691: exit status 85 (74.254013ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |          |
	|         | -p download-only-324691        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:40:26
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:40:26.023857   70846 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:40:26.024114   70846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:26.024122   70846 out.go:309] Setting ErrFile to fd 2...
	I1206 18:40:26.024127   70846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:26.024321   70846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	W1206 18:40:26.024443   70846 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17740-63652/.minikube/config/config.json: open /home/jenkins/minikube-integration/17740-63652/.minikube/config/config.json: no such file or directory
	I1206 18:40:26.025049   70846 out.go:303] Setting JSON to true
	I1206 18:40:26.025978   70846 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":4976,"bootTime":1701883050,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:40:26.026039   70846 start.go:138] virtualization: kvm guest
	I1206 18:40:26.028882   70846 out.go:97] [download-only-324691] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W1206 18:40:26.029020   70846 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 18:40:26.029067   70846 notify.go:220] Checking for updates...
	I1206 18:40:26.030561   70846 out.go:169] MINIKUBE_LOCATION=17740
	I1206 18:40:26.032168   70846 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:40:26.033550   70846 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:40:26.034955   70846 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:40:26.036268   70846 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 18:40:26.038699   70846 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:40:26.038998   70846 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:40:26.074743   70846 out.go:97] Using the kvm2 driver based on user configuration
	I1206 18:40:26.074777   70846 start.go:298] selected driver: kvm2
	I1206 18:40:26.074782   70846 start.go:902] validating driver "kvm2" against <nil>
	I1206 18:40:26.075170   70846 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:26.075251   70846 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 18:40:26.090681   70846 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 18:40:26.090755   70846 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:40:26.091236   70846 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1206 18:40:26.091392   70846 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 18:40:26.091487   70846 cni.go:84] Creating CNI manager for ""
	I1206 18:40:26.091500   70846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:40:26.091510   70846 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 18:40:26.091518   70846 start_flags.go:323] config:
	{Name:download-only-324691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-324691 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:40:26.091742   70846 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:26.093696   70846 out.go:97] Downloading VM boot image ...
	I1206 18:40:26.093727   70846 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1206 18:40:28.499176   70846 out.go:97] Starting control plane node download-only-324691 in cluster download-only-324691
	I1206 18:40:28.499211   70846 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 18:40:28.533156   70846 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1206 18:40:28.533198   70846 cache.go:56] Caching tarball of preloaded images
	I1206 18:40:28.533398   70846 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 18:40:28.535472   70846 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1206 18:40:28.535512   70846 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:28.571551   70846 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1206 18:40:32.243565   70846 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:32.243676   70846 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:33.119641   70846 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1206 18:40:33.120014   70846 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/download-only-324691/config.json ...
	I1206 18:40:33.120043   70846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/download-only-324691/config.json: {Name:mkb28b4172a52defcb9f5f5eb6ab79d2206ebd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:40:33.120222   70846 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 18:40:33.120519   70846 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-324691"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (5.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-324691 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-324691 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.850122499s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-324691
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-324691: exit status 85 (74.408532ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |          |
	|         | -p download-only-324691        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |          |
	|         | -p download-only-324691        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:40:36
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:40:36.534558   70904 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:40:36.534730   70904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:36.534741   70904 out.go:309] Setting ErrFile to fd 2...
	I1206 18:40:36.534748   70904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:36.534938   70904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	W1206 18:40:36.535083   70904 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17740-63652/.minikube/config/config.json: open /home/jenkins/minikube-integration/17740-63652/.minikube/config/config.json: no such file or directory
	I1206 18:40:36.535533   70904 out.go:303] Setting JSON to true
	I1206 18:40:36.536436   70904 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":4987,"bootTime":1701883050,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:40:36.536498   70904 start.go:138] virtualization: kvm guest
	I1206 18:40:36.538991   70904 out.go:97] [download-only-324691] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:40:36.540753   70904 out.go:169] MINIKUBE_LOCATION=17740
	I1206 18:40:36.539217   70904 notify.go:220] Checking for updates...
	I1206 18:40:36.543990   70904 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:40:36.545648   70904 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:40:36.546993   70904 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:40:36.548346   70904 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 18:40:36.550988   70904 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:40:36.551485   70904 config.go:182] Loaded profile config "download-only-324691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1206 18:40:36.551536   70904 start.go:810] api.Load failed for download-only-324691: filestore "download-only-324691": Docker machine "download-only-324691" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:40:36.551646   70904 driver.go:392] Setting default libvirt URI to qemu:///system
	W1206 18:40:36.551699   70904 start.go:810] api.Load failed for download-only-324691: filestore "download-only-324691": Docker machine "download-only-324691" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:40:36.583592   70904 out.go:97] Using the kvm2 driver based on existing profile
	I1206 18:40:36.583617   70904 start.go:298] selected driver: kvm2
	I1206 18:40:36.583624   70904 start.go:902] validating driver "kvm2" against &{Name:download-only-324691 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-324691 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:40:36.584054   70904 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:36.584135   70904 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 18:40:36.598654   70904 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 18:40:36.599738   70904 cni.go:84] Creating CNI manager for ""
	I1206 18:40:36.599764   70904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:40:36.599785   70904 start_flags.go:323] config:
	{Name:download-only-324691 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-324691 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:40:36.599998   70904 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:36.601757   70904 out.go:97] Starting control plane node download-only-324691 in cluster download-only-324691
	I1206 18:40:36.601775   70904 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:40:36.632037   70904 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 18:40:36.632072   70904 cache.go:56] Caching tarball of preloaded images
	I1206 18:40:36.632254   70904 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:40:36.634287   70904 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1206 18:40:36.634307   70904 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:36.666084   70904 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 18:40:40.796471   70904 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:40.796577   70904 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-324691"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
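The download-only run above fetches the preload tarball with a "?checksum=md5:..." query string, then logs getting, saving and verifying that checksum before the profile is considered cached. Below is a minimal Go sketch of the local verification step only, using the digest shown in the log; the file path is illustrative and this is not minikube's actual preload code.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected hex
	// digest (the value after "checksum=md5:" in the download URL above).
	func verifyMD5(path, expected string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != expected {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
		}
		return nil
	}

	func main() {
		// Digest taken from the log above; the local path is hypothetical.
		err := verifyMD5("preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4",
			"b0bd7b3b222c094c365d9c9e10e48fc7")
		fmt.Println("verify:", err)
	}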

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/json-events (5.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-324691 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-324691 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.507062862s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (5.51s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-324691
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-324691: exit status 85 (76.118393ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |          |
	|         | -p download-only-324691           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |          |
	|         | -p download-only-324691           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-324691 | jenkins | v1.32.0 | 06 Dec 23 18:40 UTC |          |
	|         | -p download-only-324691           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:40:42
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:40:42.462829   70952 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:40:42.462994   70952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:42.463003   70952 out.go:309] Setting ErrFile to fd 2...
	I1206 18:40:42.463008   70952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:40:42.463197   70952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	W1206 18:40:42.463304   70952 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17740-63652/.minikube/config/config.json: open /home/jenkins/minikube-integration/17740-63652/.minikube/config/config.json: no such file or directory
	I1206 18:40:42.463735   70952 out.go:303] Setting JSON to true
	I1206 18:40:42.464646   70952 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":4992,"bootTime":1701883050,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:40:42.464717   70952 start.go:138] virtualization: kvm guest
	I1206 18:40:42.466679   70952 out.go:97] [download-only-324691] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:40:42.468427   70952 out.go:169] MINIKUBE_LOCATION=17740
	I1206 18:40:42.466890   70952 notify.go:220] Checking for updates...
	I1206 18:40:42.471116   70952 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:40:42.472540   70952 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:40:42.474052   70952 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:40:42.475429   70952 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 18:40:42.478068   70952 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:40:42.478783   70952 config.go:182] Loaded profile config "download-only-324691": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1206 18:40:42.478839   70952 start.go:810] api.Load failed for download-only-324691: filestore "download-only-324691": Docker machine "download-only-324691" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:40:42.478963   70952 driver.go:392] Setting default libvirt URI to qemu:///system
	W1206 18:40:42.479007   70952 start.go:810] api.Load failed for download-only-324691: filestore "download-only-324691": Docker machine "download-only-324691" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:40:42.511233   70952 out.go:97] Using the kvm2 driver based on existing profile
	I1206 18:40:42.511261   70952 start.go:298] selected driver: kvm2
	I1206 18:40:42.511267   70952 start.go:902] validating driver "kvm2" against &{Name:download-only-324691 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-324691 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:40:42.511707   70952 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:42.511778   70952 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17740-63652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 18:40:42.526254   70952 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1206 18:40:42.527043   70952 cni.go:84] Creating CNI manager for ""
	I1206 18:40:42.527060   70952 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 18:40:42.527074   70952 start_flags.go:323] config:
	{Name:download-only-324691 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-324691 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:40:42.527214   70952 iso.go:125] acquiring lock: {Name:mk6e9c7dc90243dab7d2a6f322b4b6abe4dff6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:40:42.529112   70952 out.go:97] Starting control plane node download-only-324691 in cluster download-only-324691
	I1206 18:40:42.529126   70952 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 18:40:42.567625   70952 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1206 18:40:42.567662   70952 cache.go:56] Caching tarball of preloaded images
	I1206 18:40:42.567843   70952 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 18:40:42.569803   70952 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1206 18:40:42.569834   70952 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:42.605842   70952 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:26a42be529125e55182ed93a618b213b -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1206 18:40:46.502613   70952 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:46.502710   70952 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17740-63652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:40:47.296221   70952 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1206 18:40:47.296376   70952 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/download-only-324691/config.json ...
	I1206 18:40:47.296584   70952 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 18:40:47.296780   70952 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17740-63652/.minikube/cache/linux/amd64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-324691"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)
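Both LogsDuration checks end with "minikube logs failed with error: exit status 85", which matches the stdout above: a --download-only profile never creates a control plane node, so there is nothing to collect logs from and the non-zero exit is the expected outcome. A small Go sketch of asserting an expected non-zero exit code from the CLI with os/exec follows; the profile and binary paths mirror the report, but this is illustrative and not the actual aaa_download_only_test.go helper.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run "minikube logs" against the download-only profile from the report.
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-324691")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The report shows exit status 85 here, because no control plane
			// node exists for a download-only profile.
			fmt.Printf("minikube logs exited with %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not even start the command:", err)
			return
		}
		fmt.Println("logs unexpectedly succeeded for a download-only profile")
	}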

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-324691
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-106585 --alsologtostderr --binary-mirror http://127.0.0.1:39041 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-106585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-106585
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestOffline (110.6s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-383530 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-383530 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m49.407778806s)
helpers_test.go:175: Cleaning up "offline-crio-383530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-383530
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-383530: (1.187421124s)
--- PASS: TestOffline (110.60s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-463584
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-463584: exit status 85 (66.995849ms)

                                                
                                                
-- stdout --
	* Profile "addons-463584" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463584"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-463584
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-463584: exit status 85 (67.589233ms)

                                                
                                                
-- stdout --
	* Profile "addons-463584" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463584"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (153.59s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-463584 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-463584 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.590913424s)
--- PASS: TestAddons/Setup (153.59s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 34.299936ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6thb7" [0b3afa89-7f8a-4644-963a-c31b40f2a80d] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.025418566s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-smdkb" [e907c989-b9af-4449-a8a9-628e470fe380] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014129886s
addons_test.go:339: (dbg) Run:  kubectl --context addons-463584 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-463584 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-463584 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.102141589s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 ip
2023/12/06 18:43:37 [DEBUG] GET http://192.168.39.94:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.07s)
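The registry check above probes the addon from inside the cluster: a throwaway busybox pod runs wget --spider against the service DNS name registry.kube-system.svc.cluster.local, and --rm removes the pod as soon as the command exits. A short sketch of driving the same probe from Go, assuming kubectl is on PATH; the context name is taken from the report.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command as in the log; a zero exit status means the in-cluster
		// registry Service answered the HTTP probe.
		cmd := exec.Command(
			"kubectl", "--context", "addons-463584", "run", "--rm", "registry-test",
			"--restart=Never", "--image=gcr.io/k8s-minikube/busybox", "-it", "--",
			"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local",
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("registry reachable: %v\n%s", err == nil, out)
	}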

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.19s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kw9fl" [8302781a-631d-4216-a164-cbbd12bd9bae] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017299502s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-463584
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-463584: (6.171715564s)
--- PASS: TestAddons/parallel/InspektorGadget (11.19s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.12s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 34.12425ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-c2xz8" [613bd5aa-3d2c-4f94-8aa2-48ed4494f773] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.039014608s
addons_test.go:414: (dbg) Run:  kubectl --context addons-463584 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.12s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.15s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.638308ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-sjrq2" [c3367096-b874-410e-ad47-aa17ee4de5b2] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.018743636s
addons_test.go:472: (dbg) Run:  kubectl --context addons-463584 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-463584 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.456037959s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.15s)

                                                
                                    
x
+
TestAddons/parallel/CSI (76.4s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 37.568002ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-463584 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-463584 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [06da14bf-4df0-4aa8-b03e-eed70a8f8c73] Pending
helpers_test.go:344: "task-pv-pod" [06da14bf-4df0-4aa8-b03e-eed70a8f8c73] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [06da14bf-4df0-4aa8-b03e-eed70a8f8c73] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.025923446s
addons_test.go:583: (dbg) Run:  kubectl --context addons-463584 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-463584 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-463584 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-463584 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-463584 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-463584 delete pod task-pv-pod: (1.435849982s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-463584 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-463584 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-463584 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5aab04c7-0bee-4c55-9202-6570d468c167] Pending
helpers_test.go:344: "task-pv-pod-restore" [5aab04c7-0bee-4c55-9202-6570d468c167] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5aab04c7-0bee-4c55-9202-6570d468c167] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.023353035s
addons_test.go:625: (dbg) Run:  kubectl --context addons-463584 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-463584 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-463584 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-463584 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.883204387s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-linux-amd64 -p addons-463584 addons disable volumesnapshots --alsologtostderr -v=1: (1.041932394s)
--- PASS: TestAddons/parallel/CSI (76.40s)
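The long run of identical kubectl queries above is the test helper polling the PVC phase until the claim reports Bound (and later doing the same for the restored claim). A compact Go sketch of that wait loop follows; the context, claim name, namespace and 6m0s timeout come from the report, while the 2-second retry interval is an assumption.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase repeats the same kubectl query the helper logs above until
	// the claim reaches the wanted phase or the timeout expires.
	func waitForPVCPhase(context, name, namespace, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command(
				"kubectl", "--context", context, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", namespace,
			).Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second) // assumed interval; the real helper's may differ
		}
		return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
	}

	func main() {
		fmt.Println(waitForPVCPhase("addons-463584", "hpvc", "default", "Bound", 6*time.Minute))
	}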

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-463584 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-463584 --alsologtostderr -v=1: (1.510045681s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-4mzzg" [4a654f18-b61c-4682-9ab4-a722d11bc12e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-4mzzg" [4a654f18-b61c-4682-9ab4-a722d11bc12e] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.023707299s
--- PASS: TestAddons/parallel/Headlamp (17.53s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-bqknc" [698f910c-d5fd-438d-8e24-ffe7a778620f] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010737782s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-463584
--- PASS: TestAddons/parallel/CloudSpanner (5.77s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.6s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-463584 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-463584 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [03b237df-44cb-4a14-a2b7-1ba111b6e820] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [03b237df-44cb-4a14-a2b7-1ba111b6e820] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [03b237df-44cb-4a14-a2b7-1ba111b6e820] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.016905582s
addons_test.go:890: (dbg) Run:  kubectl --context addons-463584 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 ssh "cat /opt/local-path-provisioner/pvc-f2e7e006-6181-4bbb-9764-32096133f2ae_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-463584 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-463584 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-463584 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-463584 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.907649638s)
--- PASS: TestAddons/parallel/LocalPath (57.60s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.7s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7xjql" [6bad12d8-6f03-44fe-9b7e-a74f9991b664] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.049833743s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-463584
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.70s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-463584 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-463584 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (68.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-662170 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-662170 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m6.789252575s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-662170 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-662170 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-662170 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-662170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-662170
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-662170: (1.037838777s)
--- PASS: TestCertOptions (68.33s)
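TestCertOptions passes extra --apiserver-ips/--apiserver-names and a custom --apiserver-port, then inspects /var/lib/minikube/certs/apiserver.crt over ssh with openssl. The same subject-alternative-name check can be done with Go's crypto/x509; the sketch below assumes the certificate has already been copied off the node to a local file, which is not how the test itself does it.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// apiserver.crt is assumed to have been fetched from the node first
		// (for example via "minikube ssh" plus a redirect); the path is hypothetical.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The extra IPs and names from the start flags should show up as SANs here.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs :", cert.IPAddresses)
	}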

                                                
                                    
x
+
TestCertExpiration (330.43s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-602842 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-602842 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m10.453512529s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-602842 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-602842 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m18.901115816s)
helpers_test.go:175: Cleaning up "cert-expiration-602842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-602842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-602842: (1.076982084s)
--- PASS: TestCertExpiration (330.43s)

                                                
                                    
x
+
TestForceSystemdFlag (103.14s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-918492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1206 19:38:22.657696   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-918492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m41.907463265s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-918492 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-918492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-918492
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-918492: (1.021123909s)
--- PASS: TestForceSystemdFlag (103.14s)

                                                
                                    
x
+
TestForceSystemdEnv (85.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-443622 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-443622 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.891965413s)
helpers_test.go:175: Cleaning up "force-systemd-env-443622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-443622
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-443622: (1.035108901s)
--- PASS: TestForceSystemdEnv (85.93s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (2.98s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.98s)

                                                
                                    
x
+
TestErrorSpam/setup (47.75s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-834904 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-834904 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-834904 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-834904 --driver=kvm2  --container-runtime=crio: (47.748651649s)
--- PASS: TestErrorSpam/setup (47.75s)

                                                
                                    
x
+
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
x
+
TestErrorSpam/status (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 status
--- PASS: TestErrorSpam/status (0.82s)

                                                
                                    
x
+
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (2.27s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 stop: (2.095792643s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-834904 --log_dir /tmp/nospam-834904 stop
--- PASS: TestErrorSpam/stop (2.27s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17740-63652/.minikube/files/etc/test/nested/copy/70834/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.94s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-317483 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-317483 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m16.941599133s)
--- PASS: TestFunctional/serial/StartWithProxy (76.94s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.64s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-317483 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-317483 --alsologtostderr -v=8: (35.637405313s)
functional_test.go:659: soft start took 35.638147774s for "functional-317483" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.64s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-317483 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 cache add registry.k8s.io/pause:3.1: (1.028417478s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 cache add registry.k8s.io/pause:3.3: (1.166594312s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 cache add registry.k8s.io/pause:latest: (1.096955415s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-317483 /tmp/TestFunctionalserialCacheCmdcacheadd_local1807017503/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cache add minikube-local-cache-test:functional-317483
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 cache add minikube-local-cache-test:functional-317483: (1.117952988s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cache delete minikube-local-cache-test:functional-317483
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-317483
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.477381ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 kubectl -- --context functional-317483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-317483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (35.63s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-317483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-317483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.627224784s)
functional_test.go:757: restart took 35.627397729s for "functional-317483" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.63s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-317483 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.65s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 logs: (1.654028417s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

TestFunctional/serial/LogsFileCmd (1.63s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 logs --file /tmp/TestFunctionalserialLogsFileCmd2753456126/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 logs --file /tmp/TestFunctionalserialLogsFileCmd2753456126/001/logs.txt: (1.62464526s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.63s)

TestFunctional/serial/InvalidService (4.13s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-317483 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-317483
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-317483: exit status 115 (313.122276ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.65:32715 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-317483 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)

TestFunctional/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 config get cpus: exit status 14 (80.321557ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 config get cpus: exit status 14 (62.234145ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (26.51s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-317483 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-317483 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 78316: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.51s)

TestFunctional/parallel/DryRun (0.35s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-317483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-317483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (187.001707ms)
-- stdout --
	* [functional-317483] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1206 18:53:10.658929   77987 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:53:10.659235   77987 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:10.659247   77987 out.go:309] Setting ErrFile to fd 2...
	I1206 18:53:10.659255   77987 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:10.659575   77987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 18:53:10.660242   77987 out.go:303] Setting JSON to false
	I1206 18:53:10.661409   77987 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":5741,"bootTime":1701883050,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:53:10.661591   77987 start.go:138] virtualization: kvm guest
	I1206 18:53:10.664155   77987 out.go:177] * [functional-317483] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:53:10.666925   77987 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 18:53:10.666884   77987 notify.go:220] Checking for updates...
	I1206 18:53:10.669218   77987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:53:10.670654   77987 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:53:10.671985   77987 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:53:10.676041   77987 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:53:10.679877   77987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:53:10.690027   77987 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:53:10.690731   77987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:53:10.690801   77987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:53:10.712752   77987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I1206 18:53:10.713227   77987 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:53:10.713909   77987 main.go:141] libmachine: Using API Version  1
	I1206 18:53:10.713976   77987 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:53:10.714527   77987 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:53:10.714742   77987 main.go:141] libmachine: (functional-317483) Calling .DriverName
	I1206 18:53:10.714982   77987 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:53:10.715274   77987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:53:10.715321   77987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:53:10.730562   77987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42575
	I1206 18:53:10.731038   77987 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:53:10.731602   77987 main.go:141] libmachine: Using API Version  1
	I1206 18:53:10.731634   77987 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:53:10.732000   77987 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:53:10.732192   77987 main.go:141] libmachine: (functional-317483) Calling .DriverName
	I1206 18:53:10.766105   77987 out.go:177] * Using the kvm2 driver based on existing profile
	I1206 18:53:10.767710   77987 start.go:298] selected driver: kvm2
	I1206 18:53:10.767729   77987 start.go:902] validating driver "kvm2" against &{Name:functional-317483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-317483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.65 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:53:10.767894   77987 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:53:10.770397   77987 out.go:177] 
	W1206 18:53:10.771985   77987 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 18:53:10.773434   77987 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-317483 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)

TestFunctional/parallel/InternationalLanguage (0.29s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-317483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-317483 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (290.208922ms)
-- stdout --
	* [functional-317483] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1206 18:53:11.013181   78076 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:53:11.013478   78076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:11.013489   78076 out.go:309] Setting ErrFile to fd 2...
	I1206 18:53:11.013494   78076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:53:11.013794   78076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 18:53:11.014342   78076 out.go:303] Setting JSON to false
	I1206 18:53:11.015347   78076 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":5741,"bootTime":1701883050,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:53:11.015420   78076 start.go:138] virtualization: kvm guest
	I1206 18:53:11.017736   78076 out.go:177] * [functional-317483] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1206 18:53:11.019219   78076 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 18:53:11.019300   78076 notify.go:220] Checking for updates...
	I1206 18:53:11.020720   78076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:53:11.022334   78076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 18:53:11.023871   78076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 18:53:11.025376   78076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:53:11.026775   78076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:53:11.028763   78076 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:53:11.029462   78076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:53:11.029526   78076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:53:11.044974   78076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I1206 18:53:11.045534   78076 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:53:11.046259   78076 main.go:141] libmachine: Using API Version  1
	I1206 18:53:11.046296   78076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:53:11.046707   78076 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:53:11.046946   78076 main.go:141] libmachine: (functional-317483) Calling .DriverName
	I1206 18:53:11.047215   78076 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:53:11.047581   78076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 18:53:11.047630   78076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 18:53:11.064473   78076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I1206 18:53:11.064943   78076 main.go:141] libmachine: () Calling .GetVersion
	I1206 18:53:11.065412   78076 main.go:141] libmachine: Using API Version  1
	I1206 18:53:11.065437   78076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 18:53:11.065818   78076 main.go:141] libmachine: () Calling .GetMachineName
	I1206 18:53:11.066025   78076 main.go:141] libmachine: (functional-317483) Calling .DriverName
	I1206 18:53:11.166609   78076 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1206 18:53:11.219180   78076 start.go:298] selected driver: kvm2
	I1206 18:53:11.219207   78076 start.go:902] validating driver "kvm2" against &{Name:functional-317483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-317483 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.65 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:53:11.219350   78076 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:53:11.222311   78076 out.go:177] 
	W1206 18:53:11.223812   78076 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 18:53:11.225374   78076 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)

TestFunctional/parallel/StatusCmd (1.29s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)

TestFunctional/parallel/ServiceCmdConnect (10.65s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-317483 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-317483 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-mqcn7" [c065340b-5f5e-4de0-a9c1-160e132b952f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-mqcn7" [c065340b-5f5e-4de0-a9c1-160e132b952f] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.039082323s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.65:30979
functional_test.go:1674: http://192.168.39.65:30979: success! body:

Hostname: hello-node-connect-55497b8b78-mqcn7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.65:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.65:30979
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.65s)

TestFunctional/parallel/AddonsCmd (0.2s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (49.04s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3b03fe8a-bf3c-493c-8408-a792a9547621] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01712167s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-317483 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-317483 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-317483 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-317483 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-317483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eefd926c-af17-41a7-95a8-be114d2af24f] Pending
helpers_test.go:344: "sp-pod" [eefd926c-af17-41a7-95a8-be114d2af24f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eefd926c-af17-41a7-95a8-be114d2af24f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.026705657s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-317483 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-317483 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-317483 delete -f testdata/storage-provisioner/pod.yaml: (2.834447584s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-317483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [04f8cbc2-103b-4f87-9ee7-b94fc24ac23e] Pending
helpers_test.go:344: "sp-pod" [04f8cbc2-103b-4f87-9ee7-b94fc24ac23e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1206 18:53:22.656885   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:22.662936   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:22.673254   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:22.693613   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:22.734019   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:22.815094   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:22.975544   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:23.296160   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:23.936375   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:53:25.216673   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [04f8cbc2-103b-4f87-9ee7-b94fc24ac23e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.021906281s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-317483 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.04s)

TestFunctional/parallel/SSHCmd (0.49s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

TestFunctional/parallel/CpCmd (1.06s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh -n functional-317483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 cp functional-317483:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1777403286/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh -n functional-317483 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.06s)

TestFunctional/parallel/MySQL (27.79s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-317483 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-g6rnl" [a0404a00-d389-43ea-9ddf-2fb514a8d288] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-g6rnl" [a0404a00-d389-43ea-9ddf-2fb514a8d288] Running
E1206 18:53:27.777712   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.044116693s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-317483 exec mysql-859648c796-g6rnl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-317483 exec mysql-859648c796-g6rnl -- mysql -ppassword -e "show databases;": exit status 1 (339.010019ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-317483 exec mysql-859648c796-g6rnl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-317483 exec mysql-859648c796-g6rnl -- mysql -ppassword -e "show databases;": exit status 1 (386.694517ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-317483 exec mysql-859648c796-g6rnl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-317483 exec mysql-859648c796-g6rnl -- mysql -ppassword -e "show databases;": exit status 1 (229.887082ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-317483 exec mysql-859648c796-g6rnl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.79s)

TestFunctional/parallel/FileSync (0.26s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/70834/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo cat /etc/test/nested/copy/70834/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.74s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/70834.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo cat /etc/ssl/certs/70834.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/70834.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo cat /usr/share/ca-certificates/70834.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/708342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo cat /etc/ssl/certs/708342.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/708342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo cat /usr/share/ca-certificates/708342.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-317483 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh "sudo systemctl is-active docker": exit status 1 (273.681075ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh "sudo systemctl is-active containerd": exit status 1 (233.615094ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
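Note: on this crio cluster both docker and containerd report "inactive", so "systemctl is-active" exits non-zero (the remote status 3 surfaces through ssh, and minikube itself exits 1); that non-active result is the pass condition. A hedged Go sketch of the same check follows; the helper name is illustrative, not from the suite.

// runtime_check.go - sketch: confirm a container runtime is NOT active in the
// minikube VM by inspecting the result of `systemctl is-active`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inactive(profile, unit string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit)
	out, err := cmd.CombinedOutput()
	// systemctl exits non-zero for anything other than "active"; the log above
	// shows stdout "inactive" for both docker and containerd.
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, inactive("functional-317483", unit))
	}
}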

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-317483 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-317483 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-nxjp8" [2d650389-b0d2-4d75-b27a-0d2f3111cb6b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-nxjp8" [2d650389-b0d2-4d75-b27a-0d2f3111cb6b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.025839357s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)
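Note: the deployment above is a plain "kubectl create deployment" plus a NodePort expose against the functional-317483 context, followed by a readiness wait. A minimal sketch reproducing those calls outside the test harness is below; it assumes kubectl is on PATH and uses the same context name, and is not the suite's own implementation.

// deploy_hello_node.go - sketch: create the hello-node deployment, expose it
// on a NodePort, and wait for it, mirroring the kubectl invocations logged above.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-317483"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	_ = kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	_ = kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// The test waits for pods matching app=hello-node to become Ready; a simple
	// equivalent outside the harness:
	_ = kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=120s")
}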

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 version -o=json --components: (1.271247989s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-317483 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-317483
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-317483 image ls --format short --alsologtostderr:
I1206 18:53:37.985511   78998 out.go:296] Setting OutFile to fd 1 ...
I1206 18:53:37.985687   78998 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:37.985698   78998 out.go:309] Setting ErrFile to fd 2...
I1206 18:53:37.985703   78998 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:37.985887   78998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
I1206 18:53:37.986439   78998 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:37.986541   78998 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:37.986965   78998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:37.987017   78998 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:38.001355   78998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
I1206 18:53:38.001789   78998 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:38.002348   78998 main.go:141] libmachine: Using API Version  1
I1206 18:53:38.002375   78998 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:38.002744   78998 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:38.002906   78998 main.go:141] libmachine: (functional-317483) Calling .GetState
I1206 18:53:38.004669   78998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:38.004706   78998 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:38.019296   78998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
I1206 18:53:38.019733   78998 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:38.020277   78998 main.go:141] libmachine: Using API Version  1
I1206 18:53:38.020298   78998 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:38.020644   78998 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:38.020861   78998 main.go:141] libmachine: (functional-317483) Calling .DriverName
I1206 18:53:38.021086   78998 ssh_runner.go:195] Run: systemctl --version
I1206 18:53:38.021120   78998 main.go:141] libmachine: (functional-317483) Calling .GetSSHHostname
I1206 18:53:38.023894   78998 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:38.024389   78998 main.go:141] libmachine: (functional-317483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:af:b4", ip: ""} in network mk-functional-317483: {Iface:virbr1 ExpiryTime:2023-12-06 19:50:27 +0000 UTC Type:0 Mac:52:54:00:f5:af:b4 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-317483 Clientid:01:52:54:00:f5:af:b4}
I1206 18:53:38.024427   78998 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined IP address 192.168.39.65 and MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:38.024666   78998 main.go:141] libmachine: (functional-317483) Calling .GetSSHPort
I1206 18:53:38.024857   78998 main.go:141] libmachine: (functional-317483) Calling .GetSSHKeyPath
I1206 18:53:38.025041   78998 main.go:141] libmachine: (functional-317483) Calling .GetSSHUsername
I1206 18:53:38.025163   78998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/functional-317483/id_rsa Username:docker}
I1206 18:53:38.139677   78998 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 18:53:38.215893   78998 main.go:141] libmachine: Making call to close driver server
I1206 18:53:38.215909   78998 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:38.216216   78998 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:38.216235   78998 main.go:141] libmachine: Making call to close connection to plugin binary
I1206 18:53:38.216251   78998 main.go:141] libmachine: Making call to close driver server
I1206 18:53:38.216269   78998 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:38.216492   78998 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:38.216511   78998 main.go:141] libmachine: Making call to close connection to plugin binary
I1206 18:53:38.216543   78998 main.go:141] libmachine: (functional-317483) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-317483 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-317483  | 616b783b79855 | 3.34kB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-317483 image ls --format table --alsologtostderr:
I1206 18:53:39.216823   79135 out.go:296] Setting OutFile to fd 1 ...
I1206 18:53:39.217118   79135 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:39.217130   79135 out.go:309] Setting ErrFile to fd 2...
I1206 18:53:39.217134   79135 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:39.217327   79135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
I1206 18:53:39.217918   79135 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:39.218029   79135 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:39.218403   79135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:39.218451   79135 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:39.233770   79135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
I1206 18:53:39.234276   79135 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:39.234802   79135 main.go:141] libmachine: Using API Version  1
I1206 18:53:39.234827   79135 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:39.235190   79135 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:39.235413   79135 main.go:141] libmachine: (functional-317483) Calling .GetState
I1206 18:53:39.237506   79135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:39.237547   79135 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:39.251953   79135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
I1206 18:53:39.252414   79135 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:39.252913   79135 main.go:141] libmachine: Using API Version  1
I1206 18:53:39.252933   79135 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:39.253312   79135 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:39.253530   79135 main.go:141] libmachine: (functional-317483) Calling .DriverName
I1206 18:53:39.253747   79135 ssh_runner.go:195] Run: systemctl --version
I1206 18:53:39.253782   79135 main.go:141] libmachine: (functional-317483) Calling .GetSSHHostname
I1206 18:53:39.256596   79135 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:39.257047   79135 main.go:141] libmachine: (functional-317483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:af:b4", ip: ""} in network mk-functional-317483: {Iface:virbr1 ExpiryTime:2023-12-06 19:50:27 +0000 UTC Type:0 Mac:52:54:00:f5:af:b4 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-317483 Clientid:01:52:54:00:f5:af:b4}
I1206 18:53:39.257083   79135 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined IP address 192.168.39.65 and MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:39.257193   79135 main.go:141] libmachine: (functional-317483) Calling .GetSSHPort
I1206 18:53:39.257385   79135 main.go:141] libmachine: (functional-317483) Calling .GetSSHKeyPath
I1206 18:53:39.257562   79135 main.go:141] libmachine: (functional-317483) Calling .GetSSHUsername
I1206 18:53:39.257731   79135 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/functional-317483/id_rsa Username:docker}
I1206 18:53:39.360225   79135 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 18:53:39.407934   79135 main.go:141] libmachine: Making call to close driver server
I1206 18:53:39.407967   79135 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:39.408248   79135 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:39.408270   79135 main.go:141] libmachine: Making call to close connection to plugin binary
I1206 18:53:39.408271   79135 main.go:141] libmachine: (functional-317483) DBG | Closing plugin on server side
I1206 18:53:39.408279   79135 main.go:141] libmachine: Making call to close driver server
I1206 18:53:39.408290   79135 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:39.408504   79135 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:39.408520   79135 main.go:141] libmachine: Making call to close connection to plugin binary
I1206 18:53:39.408537   79135 main.go:141] libmachine: (functional-317483) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-317483 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"r
epoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/
dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"616b783b798551d7ec26ac07dde4820c8b7f3b20d3e39c51201ae112b19acdd1","repoDigests":["localhost/minikube-local-cache-test@sha256:33b412abb94b61ff8d91f4adc2ca847672e4274b7d25e2e087935836428a1fc7"],"repoTags":["localhost/minikube-local-cache-test:functional-317483"],"size":"3343"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200
e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153f
b0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registr
y.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de
7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-317483 image ls --format json --alsologtostderr:
I1206 18:53:39.091495   79111 out.go:296] Setting OutFile to fd 1 ...
I1206 18:53:39.091626   79111 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:39.091635   79111 out.go:309] Setting ErrFile to fd 2...
I1206 18:53:39.091640   79111 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:39.091861   79111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
I1206 18:53:39.092464   79111 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:39.092566   79111 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:39.093021   79111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:39.093089   79111 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:39.108095   79111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
I1206 18:53:39.108543   79111 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:39.109212   79111 main.go:141] libmachine: Using API Version  1
I1206 18:53:39.109263   79111 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:39.109642   79111 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:39.109861   79111 main.go:141] libmachine: (functional-317483) Calling .GetState
I1206 18:53:39.111934   79111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:39.111992   79111 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:39.127165   79111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
I1206 18:53:39.127573   79111 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:39.128199   79111 main.go:141] libmachine: Using API Version  1
I1206 18:53:39.128234   79111 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:39.128576   79111 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:39.128751   79111 main.go:141] libmachine: (functional-317483) Calling .DriverName
I1206 18:53:39.129002   79111 ssh_runner.go:195] Run: systemctl --version
I1206 18:53:39.129032   79111 main.go:141] libmachine: (functional-317483) Calling .GetSSHHostname
I1206 18:53:39.132028   79111 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:39.132563   79111 main.go:141] libmachine: (functional-317483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:af:b4", ip: ""} in network mk-functional-317483: {Iface:virbr1 ExpiryTime:2023-12-06 19:50:27 +0000 UTC Type:0 Mac:52:54:00:f5:af:b4 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-317483 Clientid:01:52:54:00:f5:af:b4}
I1206 18:53:39.132598   79111 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined IP address 192.168.39.65 and MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:39.132770   79111 main.go:141] libmachine: (functional-317483) Calling .GetSSHPort
I1206 18:53:39.132944   79111 main.go:141] libmachine: (functional-317483) Calling .GetSSHKeyPath
I1206 18:53:39.133074   79111 main.go:141] libmachine: (functional-317483) Calling .GetSSHUsername
I1206 18:53:39.133266   79111 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/functional-317483/id_rsa Username:docker}
I1206 18:53:39.233329   79111 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 18:53:39.280204   79111 main.go:141] libmachine: Making call to close driver server
I1206 18:53:39.280216   79111 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:39.280536   79111 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:39.280560   79111 main.go:141] libmachine: Making call to close connection to plugin binary
I1206 18:53:39.280582   79111 main.go:141] libmachine: Making call to close driver server
I1206 18:53:39.280597   79111 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:39.280848   79111 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:39.280860   79111 main.go:141] libmachine: (functional-317483) DBG | Closing plugin on server side
I1206 18:53:39.280868   79111 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
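Note: the JSON emitted by "image ls --format json" is an array of objects whose id, repoDigests, repoTags, and size fields are all visible in the stdout above. A hedged sketch of decoding that output, assuming only those fields:

// parse_image_ls.go - sketch: decode the output of
// `minikube image ls --format json` into Go structs.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the stdout above; any other fields are
// simply ignored by encoding/json.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-317483",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}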

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-317483 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 616b783b798551d7ec26ac07dde4820c8b7f3b20d3e39c51201ae112b19acdd1
repoDigests:
- localhost/minikube-local-cache-test@sha256:33b412abb94b61ff8d91f4adc2ca847672e4274b7d25e2e087935836428a1fc7
repoTags:
- localhost/minikube-local-cache-test:functional-317483
size: "3343"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-317483 image ls --format yaml --alsologtostderr:
I1206 18:53:38.289207   79022 out.go:296] Setting OutFile to fd 1 ...
I1206 18:53:38.289538   79022 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:38.289548   79022 out.go:309] Setting ErrFile to fd 2...
I1206 18:53:38.289552   79022 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:38.289768   79022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
I1206 18:53:38.290424   79022 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:38.290526   79022 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:38.291087   79022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:38.291152   79022 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:38.305967   79022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
I1206 18:53:38.306447   79022 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:38.307038   79022 main.go:141] libmachine: Using API Version  1
I1206 18:53:38.307067   79022 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:38.307390   79022 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:38.307593   79022 main.go:141] libmachine: (functional-317483) Calling .GetState
I1206 18:53:38.309565   79022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:38.309630   79022 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:38.324243   79022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
I1206 18:53:38.324692   79022 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:38.325147   79022 main.go:141] libmachine: Using API Version  1
I1206 18:53:38.325172   79022 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:38.325539   79022 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:38.325825   79022 main.go:141] libmachine: (functional-317483) Calling .DriverName
I1206 18:53:38.326098   79022 ssh_runner.go:195] Run: systemctl --version
I1206 18:53:38.326130   79022 main.go:141] libmachine: (functional-317483) Calling .GetSSHHostname
I1206 18:53:38.329219   79022 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:38.329652   79022 main.go:141] libmachine: (functional-317483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:af:b4", ip: ""} in network mk-functional-317483: {Iface:virbr1 ExpiryTime:2023-12-06 19:50:27 +0000 UTC Type:0 Mac:52:54:00:f5:af:b4 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-317483 Clientid:01:52:54:00:f5:af:b4}
I1206 18:53:38.329689   79022 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined IP address 192.168.39.65 and MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:38.329782   79022 main.go:141] libmachine: (functional-317483) Calling .GetSSHPort
I1206 18:53:38.329957   79022 main.go:141] libmachine: (functional-317483) Calling .GetSSHKeyPath
I1206 18:53:38.330086   79022 main.go:141] libmachine: (functional-317483) Calling .GetSSHUsername
I1206 18:53:38.330282   79022 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/functional-317483/id_rsa Username:docker}
I1206 18:53:38.440234   79022 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 18:53:38.504696   79022 main.go:141] libmachine: Making call to close driver server
I1206 18:53:38.504716   79022 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:38.505007   79022 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:38.505024   79022 main.go:141] libmachine: Making call to close connection to plugin binary
I1206 18:53:38.505034   79022 main.go:141] libmachine: Making call to close driver server
I1206 18:53:38.505043   79022 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:38.505283   79022 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:38.505299   79022 main.go:141] libmachine: (functional-317483) DBG | Closing plugin on server side
I1206 18:53:38.505302   79022 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh pgrep buildkitd
2023/12/06 18:53:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh pgrep buildkitd: exit status 1 (228.416747ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image build -t localhost/my-image:functional-317483 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 image build -t localhost/my-image:functional-317483 testdata/build --alsologtostderr: (2.217852209s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-317483 image build -t localhost/my-image:functional-317483 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d6a3ece1124
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-317483
--> 64667316fd2
Successfully tagged localhost/my-image:functional-317483
64667316fd2cfe066477ac2cadc19abecc7e3d94d48fdca57c106a45a8204de8
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-317483 image build -t localhost/my-image:functional-317483 testdata/build --alsologtostderr:
I1206 18:53:38.813083   79076 out.go:296] Setting OutFile to fd 1 ...
I1206 18:53:38.813250   79076 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:38.813262   79076 out.go:309] Setting ErrFile to fd 2...
I1206 18:53:38.813270   79076 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:53:38.813544   79076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
I1206 18:53:38.814390   79076 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:38.814951   79076 config.go:182] Loaded profile config "functional-317483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:53:38.815344   79076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:38.815401   79076 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:38.830062   79076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
I1206 18:53:38.830591   79076 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:38.831299   79076 main.go:141] libmachine: Using API Version  1
I1206 18:53:38.831328   79076 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:38.831725   79076 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:38.831961   79076 main.go:141] libmachine: (functional-317483) Calling .GetState
I1206 18:53:38.834062   79076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1206 18:53:38.834101   79076 main.go:141] libmachine: Launching plugin server for driver kvm2
I1206 18:53:38.849805   79076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
I1206 18:53:38.850222   79076 main.go:141] libmachine: () Calling .GetVersion
I1206 18:53:38.850722   79076 main.go:141] libmachine: Using API Version  1
I1206 18:53:38.850743   79076 main.go:141] libmachine: () Calling .SetConfigRaw
I1206 18:53:38.851138   79076 main.go:141] libmachine: () Calling .GetMachineName
I1206 18:53:38.851292   79076 main.go:141] libmachine: (functional-317483) Calling .DriverName
I1206 18:53:38.851491   79076 ssh_runner.go:195] Run: systemctl --version
I1206 18:53:38.851516   79076 main.go:141] libmachine: (functional-317483) Calling .GetSSHHostname
I1206 18:53:38.854735   79076 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:38.855287   79076 main.go:141] libmachine: (functional-317483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:af:b4", ip: ""} in network mk-functional-317483: {Iface:virbr1 ExpiryTime:2023-12-06 19:50:27 +0000 UTC Type:0 Mac:52:54:00:f5:af:b4 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:functional-317483 Clientid:01:52:54:00:f5:af:b4}
I1206 18:53:38.855306   79076 main.go:141] libmachine: (functional-317483) DBG | domain functional-317483 has defined IP address 192.168.39.65 and MAC address 52:54:00:f5:af:b4 in network mk-functional-317483
I1206 18:53:38.855573   79076 main.go:141] libmachine: (functional-317483) Calling .GetSSHPort
I1206 18:53:38.855730   79076 main.go:141] libmachine: (functional-317483) Calling .GetSSHKeyPath
I1206 18:53:38.855891   79076 main.go:141] libmachine: (functional-317483) Calling .GetSSHUsername
I1206 18:53:38.855992   79076 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/functional-317483/id_rsa Username:docker}
I1206 18:53:38.962827   79076 build_images.go:151] Building image from path: /tmp/build.3638177841.tar
I1206 18:53:38.962897   79076 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 18:53:38.994049   79076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3638177841.tar
I1206 18:53:39.005466   79076 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3638177841.tar: stat -c "%s %y" /var/lib/minikube/build/build.3638177841.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3638177841.tar': No such file or directory
I1206 18:53:39.005548   79076 ssh_runner.go:362] scp /tmp/build.3638177841.tar --> /var/lib/minikube/build/build.3638177841.tar (3072 bytes)
I1206 18:53:39.041542   79076 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3638177841
I1206 18:53:39.054141   79076 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3638177841 -xf /var/lib/minikube/build/build.3638177841.tar
I1206 18:53:39.065484   79076 crio.go:297] Building image: /var/lib/minikube/build/build.3638177841
I1206 18:53:39.065548   79076 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-317483 /var/lib/minikube/build/build.3638177841 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 18:53:40.932694   79076 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-317483 /var/lib/minikube/build/build.3638177841 --cgroup-manager=cgroupfs: (1.867119117s)
I1206 18:53:40.932769   79076 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3638177841
I1206 18:53:40.942978   79076 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3638177841.tar
I1206 18:53:40.951624   79076 build_images.go:207] Built localhost/my-image:functional-317483 from /tmp/build.3638177841.tar
I1206 18:53:40.951660   79076 build_images.go:123] succeeded building to: functional-317483
I1206 18:53:40.951667   79076 build_images.go:124] failed building to: 
I1206 18:53:40.951737   79076 main.go:141] libmachine: Making call to close driver server
I1206 18:53:40.951760   79076 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:40.952078   79076 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:40.952098   79076 main.go:141] libmachine: Making call to close connection to plugin binary
I1206 18:53:40.952108   79076 main.go:141] libmachine: Making call to close driver server
I1206 18:53:40.952116   79076 main.go:141] libmachine: (functional-317483) Calling .Close
I1206 18:53:40.953666   79076 main.go:141] libmachine: (functional-317483) DBG | Closing plugin on server side
I1206 18:53:40.953689   79076 main.go:141] libmachine: Successfully made call to close driver server
I1206 18:53:40.953728   79076 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls
E1206 18:53:43.139028   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)
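Note: the STEP 1/3..3/3 lines above imply a three-step build context in testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), which minikube builds with podman inside the VM. Below is a hedged reconstruction of an equivalent context and invocation; the Dockerfile contents are inferred from the log and the content.txt payload is an assumption.

// image_build_sketch.go - sketch: write a build context equivalent to the one
// logged above and build it with `minikube image build`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Three steps, matching STEP 1/3..3/3 in the build output above.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-317483",
		"image", "build", "-t", "localhost/my-image:functional-317483", dir)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}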

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-317483
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image load --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 image load --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr: (5.345947361s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.59s)
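Note: "image load --daemon" copies an image from the local docker daemon (the addon-resizer tag created in the Setup test above) into the cluster's image store, and the follow-up "image ls" confirms it arrived. A hedged sketch of that round trip, using the profile and tag from the log:

// image_load_sketch.go - sketch: load a docker-daemon image into the minikube
// cluster and confirm it appears in `image ls`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-317483"
	tag := "gcr.io/google-containers/addon-resizer:" + profile

	load := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "load", "--daemon", tag)
	if out, err := load.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("load failed: %v\n%s", err, out))
	}

	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("loaded:", strings.Contains(string(ls), "gcr.io/google-containers/addon-resizer"))
}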

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image load --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-317483 image load --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr: (2.411672058s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 service list -o json
functional_test.go:1493: Took "354.566391ms" to run "out/minikube-linux-amd64 -p functional-317483 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.65:30135
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.65:30135
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
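The ServiceCmd checks above only retrieve the NodePort URL for hello-node. As a rough manual follow-up, reusing the profile name and endpoint from this run (the curl call is an added assumption, not something the test executes):
	# print the URL minikube derives for the hello-node NodePort service
	$ out/minikube-linux-amd64 -p functional-317483 service hello-node --url
	http://192.168.39.65:30135
	# assumed extra step: confirm the service answers on that endpoint
	$ curl -s http://192.168.39.65:30135/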

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "356.002407ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "74.861911ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (23.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdany-port3868673140/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701888789694607212" to /tmp/TestFunctionalparallelMountCmdany-port3868673140/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701888789694607212" to /tmp/TestFunctionalparallelMountCmdany-port3868673140/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701888789694607212" to /tmp/TestFunctionalparallelMountCmdany-port3868673140/001/test-1701888789694607212
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.146122ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 18:53 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 18:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 18:53 test-1701888789694607212
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh cat /mount-9p/test-1701888789694607212
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-317483 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [83ab8aaf-eb70-468a-a13f-2d8c98fad815] Pending
helpers_test.go:344: "busybox-mount" [83ab8aaf-eb70-468a-a13f-2d8c98fad815] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [83ab8aaf-eb70-468a-a13f-2d8c98fad815] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [83ab8aaf-eb70-468a-a13f-2d8c98fad815] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.089780937s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-317483 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdany-port3868673140/001:/mount-9p --alsologtostderr -v=1] ...
E1206 18:53:32.898091   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.23s)
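For readers reproducing the 9p mount check by hand, a minimal sketch built only from the commands this entry drives; the profile name and host path are taken from this run, and backgrounding the mount with & is an assumption (the test manages it as a daemon):
	# expose a host directory inside the guest over 9p
	$ out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdany-port3868673140/001:/mount-9p --alsologtostderr -v=1 &
	# verify the guest sees a 9p filesystem at the mount point, then list its contents
	$ out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-amd64 -p functional-317483 ssh -- ls -la /mount-9p
	# force-unmount when finished, mirroring the test's cleanup
	$ out/minikube-linux-amd64 -p functional-317483 ssh "sudo umount -f /mount-9p"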

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "327.366957ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "68.109926ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image rm gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-317483
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 image save --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-317483
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
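The save-to-daemon flow above is a simple round trip; a condensed sketch of the same three commands, all taken verbatim from this entry:
	# drop the local tag, then pull the image back out of the cluster runtime into the docker daemon
	$ docker rmi gcr.io/google-containers/addon-resizer:functional-317483
	$ out/minikube-linux-amd64 -p functional-317483 image save --daemon gcr.io/google-containers/addon-resizer:functional-317483 --alsologtostderr
	# the tag should be visible to docker again
	$ docker image inspect gcr.io/google-containers/addon-resizer:functional-317483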

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdspecific-port1450605880/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.469372ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdspecific-port1450605880/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh "sudo umount -f /mount-9p": exit status 1 (220.570349ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-317483 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdspecific-port1450605880/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T" /mount1: exit status 1 (321.130909ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-317483 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-317483 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)
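Cleanup of stray mount helpers does not need to happen process by process; a short sketch using the kill flag exercised above (path and profile from this run, backgrounding with & is an assumption):
	# start a mount of the same host directory used by this test
	$ out/minikube-linux-amd64 mount -p functional-317483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408553415/001:/mount1 --alsologtostderr -v=1 &
	# one command tears down every mount process attached to the profile
	$ out/minikube-linux-amd64 mount -p functional-317483 --kill=true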

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-317483
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-317483
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-317483
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (110.79s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-283223 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1206 18:54:03.620182   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:54:44.580415   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-283223 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m50.790604814s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (110.79s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.08s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-283223 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-283223 addons enable ingress --alsologtostderr -v=5: (14.079018914s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.08s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-283223 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

                                                
                                    
TestJSONOutput/start/Command (60.74s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-755249 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1206 18:58:50.342243   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 18:59:16.556714   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-755249 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.735386916s)
--- PASS: TestJSONOutput/start/Command (60.74s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-755249 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-755249 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (9.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-755249 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-755249 --output=json --user=testUser: (9.112022954s)
--- PASS: TestJSONOutput/stop/Command (9.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-048757 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-048757 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.423332ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b99715fa-5e66-4e24-a754-b839c2dbcd76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-048757] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f501b1de-fe6c-4304-b3f0-4ba2d90b6e1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17740"}}
	{"specversion":"1.0","id":"1157ee72-dcfb-4785-ba75-28e4ec8b8250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f06560a7-40fc-4aa3-9def-a667f79c7e16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig"}}
	{"specversion":"1.0","id":"75e74227-485f-44e0-9746-c7c929c9b3dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube"}}
	{"specversion":"1.0","id":"ce331a4e-e76c-4a2b-950c-74d5e942d94a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"920641c3-7000-4313-9df8-466ea62a026b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"88bb3336-a56c-4c28-a0a3-194f8294f7db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-048757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-048757
--- PASS: TestErrorJSONOutput (0.23s)
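Each line of the JSON output above is a self-contained CloudEvents object with a type such as io.k8s.sigs.minikube.step, .info, or .error and a data payload carrying fields like message and exitcode. As a hedged illustration of consuming that stream (jq is an added assumption; it is not used anywhere in this run):
	# emit CloudEvents JSON, one object per line, and print each event's message field
	$ out/minikube-linux-amd64 start -p json-output-error-048757 --memory=2200 --output=json --wait=true --driver=fail | jq -r '.data.message'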

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (96.13s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-961677 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-961677 --driver=kvm2  --container-runtime=crio: (46.834153776s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-964136 --driver=kvm2  --container-runtime=crio
E1206 19:00:38.477408   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:00:51.525717   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:51.531044   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:51.541300   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:51.561599   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:51.601953   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:51.682322   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:51.842814   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:52.163393   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:52.804398   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:54.084962   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:00:56.645355   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:01:01.765913   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:01:12.006721   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-964136 --driver=kvm2  --container-runtime=crio: (46.763509056s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-961677
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-964136
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-964136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-964136
helpers_test.go:175: Cleaning up "first-961677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-961677
--- PASS: TestMinikubeProfile (96.13s)
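A compressed view of the profile juggling above, using only commands from this entry, for anyone replaying it by hand:
	# create two independent clusters under separate profile names
	$ out/minikube-linux-amd64 start -p first-961677 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 start -p second-964136 --driver=kvm2 --container-runtime=crio
	# switch the active profile and confirm it in the JSON listing
	$ out/minikube-linux-amd64 profile first-961677
	$ out/minikube-linux-amd64 profile list -ojson
	# remove both clusters once done
	$ out/minikube-linux-amd64 delete -p second-964136
	$ out/minikube-linux-amd64 delete -p first-961677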

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-090770 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1206 19:01:32.487030   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-090770 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.072882161s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.07s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-090770 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-090770 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.29s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-112283 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1206 19:02:13.447219   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-112283 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.289809787s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.29s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-112283 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-112283 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-090770 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-112283 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-112283 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-112283
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-112283: (1.21932685s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.65s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-112283
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-112283: (20.64968425s)
--- PASS: TestMountStart/serial/RestartStopped (21.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-112283 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-112283 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (112.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-593099 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1206 19:02:54.631981   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:03:22.317727   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:03:22.657304   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:03:35.368411   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-593099 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.155331981s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-593099 -- rollout status deployment/busybox: (2.510719298s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-x24l4 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-x24l4 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-x24l4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.60s)
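The DNS checks above amount to running nslookup inside a busybox replica scheduled on each node; a minimal sketch using commands from this entry (the pod names are specific to this rollout and will differ on a fresh deployment):
	# deploy the two-replica busybox workload used by the test and wait for the rollout
	$ out/minikube-linux-amd64 kubectl -p multinode-593099 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	$ out/minikube-linux-amd64 kubectl -p multinode-593099 -- rollout status deployment/busybox
	# resolve an external name and the in-cluster API service from one of the pods
	$ out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- nslookup kubernetes.io
	$ out/minikube-linux-amd64 kubectl -p multinode-593099 -- exec busybox-5bc68d56bd-shdgj -- nslookup kubernetes.default.svc.cluster.local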

                                                
                                    
TestMultiNode/serial/AddNode (42.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-593099 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-593099 -v 3 --alsologtostderr: (41.515977476s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.14s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-593099 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp testdata/cp-test.txt multinode-593099:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile136929012/001/cp-test_multinode-593099.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099:/home/docker/cp-test.txt multinode-593099-m02:/home/docker/cp-test_multinode-593099_multinode-593099-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m02 "sudo cat /home/docker/cp-test_multinode-593099_multinode-593099-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099:/home/docker/cp-test.txt multinode-593099-m03:/home/docker/cp-test_multinode-593099_multinode-593099-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m03 "sudo cat /home/docker/cp-test_multinode-593099_multinode-593099-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp testdata/cp-test.txt multinode-593099-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile136929012/001/cp-test_multinode-593099-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099-m02:/home/docker/cp-test.txt multinode-593099:/home/docker/cp-test_multinode-593099-m02_multinode-593099.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099 "sudo cat /home/docker/cp-test_multinode-593099-m02_multinode-593099.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099-m02:/home/docker/cp-test.txt multinode-593099-m03:/home/docker/cp-test_multinode-593099-m02_multinode-593099-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m03 "sudo cat /home/docker/cp-test_multinode-593099-m02_multinode-593099-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp testdata/cp-test.txt multinode-593099-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile136929012/001/cp-test_multinode-593099-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099-m03:/home/docker/cp-test.txt multinode-593099:/home/docker/cp-test_multinode-593099-m03_multinode-593099.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099 "sudo cat /home/docker/cp-test_multinode-593099-m03_multinode-593099.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 cp multinode-593099-m03:/home/docker/cp-test.txt multinode-593099-m02:/home/docker/cp-test_multinode-593099-m03_multinode-593099-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m02 "sudo cat /home/docker/cp-test_multinode-593099-m03_multinode-593099-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.84s)
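All of the CopyFile permutations above follow the same two-step pattern; a single representative pair, taken from this entry:
	# copy a file from the host into the m02 node, then read it back over ssh
	$ out/minikube-linux-amd64 -p multinode-593099 cp testdata/cp-test.txt multinode-593099-m02:/home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p multinode-593099 ssh -n multinode-593099-m02 "sudo cat /home/docker/cp-test.txt"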

                                                
                                    
TestMultiNode/serial/StopNode (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-593099 node stop m03: (2.095690521s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-593099 status: exit status 7 (464.164825ms)

                                                
                                                
-- stdout --
	multinode-593099
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-593099-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-593099-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-593099 status --alsologtostderr: exit status 7 (447.367401ms)

                                                
                                                
-- stdout --
	multinode-593099
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-593099-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-593099-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:05:39.420093   85962 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:05:39.420355   85962 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:05:39.420364   85962 out.go:309] Setting ErrFile to fd 2...
	I1206 19:05:39.420368   85962 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:05:39.420539   85962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:05:39.420698   85962 out.go:303] Setting JSON to false
	I1206 19:05:39.420736   85962 mustload.go:65] Loading cluster: multinode-593099
	I1206 19:05:39.420792   85962 notify.go:220] Checking for updates...
	I1206 19:05:39.421085   85962 config.go:182] Loaded profile config "multinode-593099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:05:39.421099   85962 status.go:255] checking status of multinode-593099 ...
	I1206 19:05:39.421627   85962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:05:39.421692   85962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:05:39.441379   85962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I1206 19:05:39.441832   85962 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:05:39.442385   85962 main.go:141] libmachine: Using API Version  1
	I1206 19:05:39.442409   85962 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:05:39.442774   85962 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:05:39.442970   85962 main.go:141] libmachine: (multinode-593099) Calling .GetState
	I1206 19:05:39.444512   85962 status.go:330] multinode-593099 host status = "Running" (err=<nil>)
	I1206 19:05:39.444530   85962 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:05:39.444811   85962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:05:39.444845   85962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:05:39.459654   85962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I1206 19:05:39.460058   85962 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:05:39.460445   85962 main.go:141] libmachine: Using API Version  1
	I1206 19:05:39.460471   85962 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:05:39.460790   85962 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:05:39.460946   85962 main.go:141] libmachine: (multinode-593099) Calling .GetIP
	I1206 19:05:39.463754   85962 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:05:39.464194   85962 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:05:39.464216   85962 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:05:39.464392   85962 host.go:66] Checking if "multinode-593099" exists ...
	I1206 19:05:39.464774   85962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:05:39.464833   85962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:05:39.483077   85962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I1206 19:05:39.483538   85962 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:05:39.484012   85962 main.go:141] libmachine: Using API Version  1
	I1206 19:05:39.484039   85962 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:05:39.484326   85962 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:05:39.484536   85962 main.go:141] libmachine: (multinode-593099) Calling .DriverName
	I1206 19:05:39.484810   85962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 19:05:39.484834   85962 main.go:141] libmachine: (multinode-593099) Calling .GetSSHHostname
	I1206 19:05:39.487505   85962 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:05:39.488008   85962 main.go:141] libmachine: (multinode-593099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:16:c6", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:03:01 +0000 UTC Type:0 Mac:52:54:00:37:16:c6 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-593099 Clientid:01:52:54:00:37:16:c6}
	I1206 19:05:39.488046   85962 main.go:141] libmachine: (multinode-593099) DBG | domain multinode-593099 has defined IP address 192.168.39.125 and MAC address 52:54:00:37:16:c6 in network mk-multinode-593099
	I1206 19:05:39.488112   85962 main.go:141] libmachine: (multinode-593099) Calling .GetSSHPort
	I1206 19:05:39.488274   85962 main.go:141] libmachine: (multinode-593099) Calling .GetSSHKeyPath
	I1206 19:05:39.488463   85962 main.go:141] libmachine: (multinode-593099) Calling .GetSSHUsername
	I1206 19:05:39.488615   85962 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099/id_rsa Username:docker}
	I1206 19:05:39.577727   85962 ssh_runner.go:195] Run: systemctl --version
	I1206 19:05:39.583534   85962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:05:39.596892   85962 kubeconfig.go:92] found "multinode-593099" server: "https://192.168.39.125:8443"
	I1206 19:05:39.596923   85962 api_server.go:166] Checking apiserver status ...
	I1206 19:05:39.596957   85962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:05:39.608391   85962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup
	I1206 19:05:39.616382   85962 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod6290493e5e32b3d1986ab88f381ba97f/crio-d12dc683d1dba1543fc803ce878089f2d82893ac8cf6ddfd54be3345f2651af3"
	I1206 19:05:39.616433   85962 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod6290493e5e32b3d1986ab88f381ba97f/crio-d12dc683d1dba1543fc803ce878089f2d82893ac8cf6ddfd54be3345f2651af3/freezer.state
	I1206 19:05:39.624908   85962 api_server.go:204] freezer state: "THAWED"
	I1206 19:05:39.624930   85962 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1206 19:05:39.630957   85962 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I1206 19:05:39.630985   85962 status.go:421] multinode-593099 apiserver status = Running (err=<nil>)
	I1206 19:05:39.630996   85962 status.go:257] multinode-593099 status: &{Name:multinode-593099 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 19:05:39.631011   85962 status.go:255] checking status of multinode-593099-m02 ...
	I1206 19:05:39.631361   85962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:05:39.631400   85962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:05:39.646715   85962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39587
	I1206 19:05:39.647181   85962 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:05:39.647663   85962 main.go:141] libmachine: Using API Version  1
	I1206 19:05:39.647685   85962 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:05:39.648025   85962 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:05:39.648246   85962 main.go:141] libmachine: (multinode-593099-m02) Calling .GetState
	I1206 19:05:39.649874   85962 status.go:330] multinode-593099-m02 host status = "Running" (err=<nil>)
	I1206 19:05:39.649896   85962 host.go:66] Checking if "multinode-593099-m02" exists ...
	I1206 19:05:39.650303   85962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:05:39.650351   85962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:05:39.666945   85962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37535
	I1206 19:05:39.667372   85962 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:05:39.667829   85962 main.go:141] libmachine: Using API Version  1
	I1206 19:05:39.667856   85962 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:05:39.668159   85962 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:05:39.668356   85962 main.go:141] libmachine: (multinode-593099-m02) Calling .GetIP
	I1206 19:05:39.671416   85962 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:05:39.671915   85962 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:05:39.671947   85962 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:05:39.672116   85962 host.go:66] Checking if "multinode-593099-m02" exists ...
	I1206 19:05:39.672463   85962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:05:39.672503   85962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:05:39.687249   85962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38145
	I1206 19:05:39.687655   85962 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:05:39.688090   85962 main.go:141] libmachine: Using API Version  1
	I1206 19:05:39.688112   85962 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:05:39.688452   85962 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:05:39.688627   85962 main.go:141] libmachine: (multinode-593099-m02) Calling .DriverName
	I1206 19:05:39.688801   85962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 19:05:39.688819   85962 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHHostname
	I1206 19:05:39.691337   85962 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:05:39.691704   85962 main.go:141] libmachine: (multinode-593099-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:67:33", ip: ""} in network mk-multinode-593099: {Iface:virbr1 ExpiryTime:2023-12-06 20:04:08 +0000 UTC Type:0 Mac:52:54:00:49:67:33 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-593099-m02 Clientid:01:52:54:00:49:67:33}
	I1206 19:05:39.691741   85962 main.go:141] libmachine: (multinode-593099-m02) DBG | domain multinode-593099-m02 has defined IP address 192.168.39.6 and MAC address 52:54:00:49:67:33 in network mk-multinode-593099
	I1206 19:05:39.691804   85962 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHPort
	I1206 19:05:39.691955   85962 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHKeyPath
	I1206 19:05:39.692076   85962 main.go:141] libmachine: (multinode-593099-m02) Calling .GetSSHUsername
	I1206 19:05:39.692200   85962 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17740-63652/.minikube/machines/multinode-593099-m02/id_rsa Username:docker}
	I1206 19:05:39.776604   85962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:05:39.789944   85962 status.go:257] multinode-593099-m02 status: &{Name:multinode-593099-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 19:05:39.789989   85962 status.go:255] checking status of multinode-593099-m03 ...
	I1206 19:05:39.790422   85962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1206 19:05:39.790477   85962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1206 19:05:39.805046   85962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
	I1206 19:05:39.805482   85962 main.go:141] libmachine: () Calling .GetVersion
	I1206 19:05:39.805955   85962 main.go:141] libmachine: Using API Version  1
	I1206 19:05:39.805982   85962 main.go:141] libmachine: () Calling .SetConfigRaw
	I1206 19:05:39.806308   85962 main.go:141] libmachine: () Calling .GetMachineName
	I1206 19:05:39.806474   85962 main.go:141] libmachine: (multinode-593099-m03) Calling .GetState
	I1206 19:05:39.807962   85962 status.go:330] multinode-593099-m03 host status = "Stopped" (err=<nil>)
	I1206 19:05:39.807978   85962 status.go:343] host is not running, skipping remaining checks
	I1206 19:05:39.807986   85962 status.go:257] multinode-593099-m03 status: &{Name:multinode-593099-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.01s)
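A minimal sketch of the sequence this test drives, reusing the profile and node names from the log above; `kubectl get --raw /healthz` stands in here for the internal apiserver probe (pgrep + freezer cgroup + /healthz) visible in the stderr dump:

    # stop one worker, then confirm status degrades to exit code 7
    out/minikube-linux-amd64 -p multinode-593099 node stop m03
    out/minikube-linux-amd64 -p multinode-593099 status                  # exit 7 while m03 is Stopped
    # the control-plane check locates the apiserver process, reads its freezer cgroup state,
    # and probes /healthz; a rough by-hand equivalent:
    out/minikube-linux-amd64 -p multinode-593099 ssh 'sudo pgrep -xnf "kube-apiserver.*minikube.*"'
    kubectl --context multinode-593099 get --raw /healthz                # expect: ok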

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 node start m03 --alsologtostderr
E1206 19:05:51.525338   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-593099 node start m03 --alsologtostderr: (29.244835882s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.90s)
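A minimal sketch of the recovery half of the flow, with the same profile and flags as in the log:

    # bring the stopped worker back and confirm every node reports Ready again
    out/minikube-linux-amd64 -p multinode-593099 node start m03 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-593099 status                  # exit 0 once all hosts are Running
    kubectl --context multinode-593099 get nodes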

                                                
                                    
TestMultiNode/serial/DeleteNode (1.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-593099 node delete m03: (1.225612098s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)
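The readiness check above uses a go-template; an equivalent jsonpath query (an alternative form, not what the test itself runs) prints each remaining node with its Ready condition:

    kubectl --context multinode-593099 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'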

                                                
                                    
TestMultiNode/serial/RestartMultiNode (445.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-593099 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1206 19:20:51.525716   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:22:54.632472   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:23:22.657393   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:25:51.525386   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:26:25.703991   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-593099 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m24.790537856s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-593099 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (445.36s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-593099
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-593099-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-593099-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.495283ms)

                                                
                                                
-- stdout --
	* [multinode-593099-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-593099-m02' is duplicated with machine name 'multinode-593099-m02' in profile 'multinode-593099'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-593099-m03 --driver=kvm2  --container-runtime=crio
E1206 19:27:54.633084   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-593099-m03 --driver=kvm2  --container-runtime=crio: (46.91989722s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-593099
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-593099: exit status 80 (230.79238ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-593099
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-593099-m03 already exists in multinode-593099-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-593099-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-593099-m03: (1.022975715s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.31s)
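A minimal sketch of the naming rule exercised here: a new profile may not reuse a machine name that already belongs to another profile (exit 14), and extra nodes are added with `node add` rather than a second `start -p`:

    out/minikube-linux-amd64 start -p multinode-593099-m02 --driver=kvm2 --container-runtime=crio   # refused: name collides with a machine in multinode-593099
    out/minikube-linux-amd64 node add -p multinode-593099   # grows the existing cluster (fails above only while the conflicting m03 profile still exists)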

                                                
                                    
TestScheduledStopUnix (118.49s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-012034 --memory=2048 --driver=kvm2  --container-runtime=crio
E1206 19:33:22.657206   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-012034 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.643892171s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-012034 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-012034 -n scheduled-stop-012034
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-012034 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-012034 --cancel-scheduled
E1206 19:33:54.572788   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-012034 -n scheduled-stop-012034
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-012034
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-012034 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-012034
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-012034: exit status 7 (83.45665ms)

                                                
                                                
-- stdout --
	scheduled-stop-012034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-012034 -n scheduled-stop-012034
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-012034 -n scheduled-stop-012034: exit status 7 (76.35072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-012034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-012034
--- PASS: TestScheduledStopUnix (118.49s)
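A minimal sketch of the scheduled-stop lifecycle the test walks through (profile name from the log; the durations are illustrative):

    out/minikube-linux-amd64 stop -p scheduled-stop-012034 --schedule 5m          # arm a stop 5 minutes out
    out/minikube-linux-amd64 status -p scheduled-stop-012034 --format='{{.TimeToStop}}'
    out/minikube-linux-amd64 stop -p scheduled-stop-012034 --cancel-scheduled     # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-012034 --schedule 15s         # re-arm; once it fires,
    out/minikube-linux-amd64 status -p scheduled-stop-012034                      # status exits 7 with everything Stopped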

                                                
                                    
TestKubernetesUpgrade (203.16s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.139190263s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-894931
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-894931: (2.216585754s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-894931 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-894931 status --format={{.Host}}: exit status 7 (85.510785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1206 19:37:54.631943   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.320620879s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-894931 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (113.96529ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-894931] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-894931
	    minikube start -p kubernetes-upgrade-894931 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8949312 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-894931 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.063385679s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-894931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-894931
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-894931: (1.155959672s)
--- PASS: TestKubernetesUpgrade (203.16s)
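A minimal sketch of the upgrade path validated above; the downgrade attempt is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested escape hatches are delete-and-recreate or a second profile:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-894931
    out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p kubernetes-upgrade-894931 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio   # exit 106: downgrade refused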

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411397 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-411397 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (104.544892ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-411397] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
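As the MK_USAGE message states, `--no-kubernetes` cannot be combined with `--kubernetes-version`; a minimal sketch of the accepted form, clearing any globally-set version first:

    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-411397 --no-kubernetes --driver=kvm2 --container-runtime=crio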

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (111.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411397 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411397 --driver=kvm2  --container-runtime=crio: (1m51.004788959s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-411397 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (111.32s)

                                                
                                    
TestNetworkPlugins/group/false (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-459609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-459609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (114.402601ms)

                                                
                                                
-- stdout --
	* [false-459609] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 19:35:02.087887   94516 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:35:02.088145   94516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:35:02.088157   94516 out.go:309] Setting ErrFile to fd 2...
	I1206 19:35:02.088161   94516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:35:02.088358   94516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-63652/.minikube/bin
	I1206 19:35:02.089005   94516 out.go:303] Setting JSON to false
	I1206 19:35:02.090040   94516 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8252,"bootTime":1701883050,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 19:35:02.090108   94516 start.go:138] virtualization: kvm guest
	I1206 19:35:02.092269   94516 out.go:177] * [false-459609] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 19:35:02.093779   94516 notify.go:220] Checking for updates...
	I1206 19:35:02.093787   94516 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:35:02.095200   94516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:35:02.096641   94516 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-63652/kubeconfig
	I1206 19:35:02.098145   94516 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-63652/.minikube
	I1206 19:35:02.099480   94516 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 19:35:02.100811   94516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:35:02.102643   94516 config.go:182] Loaded profile config "NoKubernetes-411397": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:35:02.102748   94516 config.go:182] Loaded profile config "force-systemd-env-443622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:35:02.102830   94516 config.go:182] Loaded profile config "offline-crio-383530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 19:35:02.102923   94516 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:35:02.140044   94516 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 19:35:02.141417   94516 start.go:298] selected driver: kvm2
	I1206 19:35:02.141432   94516 start.go:902] validating driver "kvm2" against <nil>
	I1206 19:35:02.141449   94516 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:35:02.143442   94516 out.go:177] 
	W1206 19:35:02.144686   94516 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1206 19:35:02.145890   94516 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-459609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-459609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-459609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459609"

                                                
                                                
----------------------- debugLogs end: false-459609 [took: 3.136450103s] --------------------------------
helpers_test.go:175: Cleaning up "false-459609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-459609
--- PASS: TestNetworkPlugins/group/false (3.41s)
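The exit 14 above is the expected behaviour: the crio runtime requires a CNI, so `--cni=false` is rejected with MK_USAGE. A minimal sketch of a form that should be accepted ("bridge" is one illustrative --cni value; any real CNI would do):

    out/minikube-linux-amd64 start -p false-459609 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio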

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411397 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411397 --no-kubernetes --driver=kvm2  --container-runtime=crio: (6.040544773s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-411397 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-411397 status -o json: exit status 2 (362.261663ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-411397","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-411397
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-411397: (1.158886772s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.56s)
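
Editor's note: the stdout block above shows the JSON shape this test inspects after starting with --no-kubernetes. A minimal Go sketch of decoding that output follows; field names are copied from the stdout above, and the standalone program is illustrative, not the test's own code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors only the fields visible in the `status -o json` output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// `minikube status` exits non-zero while components are stopped (exit status 2
	// in the log above), so the JSON is still read from stdout despite the error.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-411397",
		"status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	// With --no-kubernetes the host runs while kubelet and apiserver stay stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}
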

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411397 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411397 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.839432713s)
--- PASS: TestNoKubernetes/serial/Start (28.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-411397 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-411397 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.688303ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
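
Editor's note: the non-zero exit above (ssh exit 3) is the expected result of `systemctl is-active --quiet` when the kubelet unit is inactive. A small sketch of the same assertion, assuming the same binary and profile name as in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// is-active --quiet prints nothing; the exit status alone carries the answer.
	err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-411397",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not running (expected here):", err)
		return
	}
	fmt.Println("kubelet is unexpectedly active")
}
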

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-411397
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-411397: (1.487005929s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (26.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411397 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411397 --driver=kvm2  --container-runtime=crio: (26.175760364s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-411397 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-411397 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.916658ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
x
+
TestPause/serial/Start (65.81s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-143164 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-143164 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m5.812521818s)
--- PASS: TestPause/serial/Start (65.81s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (63.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-143164 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1206 19:40:51.525833   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-143164 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.972931729s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (63.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (131.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m11.306063179s)
--- PASS: TestNetworkPlugins/group/auto/Start (131.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-936191
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (101.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m41.272570857s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.27s)

                                                
                                    
x
+
TestPause/serial/Pause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-143164 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-143164 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-143164 --output=json --layout=cluster: exit status 2 (294.932894ms)

                                                
                                                
-- stdout --
	{"Name":"pause-143164","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-143164","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
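
Editor's note: the cluster-layout JSON above encodes state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused), which is why exit status 2 is expected while the cluster is paused. A minimal sketch of decoding only the fields shown in that stdout block; the program is illustrative, not the test's helper.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterState models just the fields visible in the stdout block above.
type clusterState struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
	}
}

func main() {
	// Exit status 2 is expected while paused, so stdout is decoded regardless
	// of the command error.
	out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", "pause-143164",
		"--output=json", "--layout=cluster").Output()

	var cs clusterState
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	// 418 ("Paused") on the cluster with 200 ("OK") nodes is the state seen above.
	fmt.Printf("cluster=%d (%s), nodes=%d\n", cs.StatusCode, cs.StatusName, len(cs.Nodes))
}
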

                                                
                                    
x
+
TestPause/serial/Unpause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-143164 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.41s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-143164 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-143164 --alsologtostderr -v=5: (1.407914489s)
--- PASS: TestPause/serial/PauseAgain (1.41s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.09s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-143164 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-143164 --alsologtostderr -v=5: (1.088505884s)
--- PASS: TestPause/serial/DeletePaused (1.09s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (13.8s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.797603402s)
--- PASS: TestPause/serial/VerifyDeletedResources (13.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (110.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1206 19:42:54.631821   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m50.872526853s)
--- PASS: TestNetworkPlugins/group/calico/Start (110.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nb4m6" [8e7cdc75-0bb0-4259-86b9-02d69d4df86d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028900674s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (90.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1206 19:43:05.704294   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.953688998s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-459609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-459609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
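
Editor's note: the KubeletFlags checks capture the running kubelet command line over ssh so its flags can be inspected. A sketch of fetching that line for the auto profile above; the standalone program is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep -a prints "<pid> <full command line>" for the kubelet process.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "auto-459609",
		"pgrep -a kubelet").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("kubelet command line:\n%s", out)
}
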

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-459609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8kzgm" [5dd43b23-4a7c-4481-94b0-e0d5761d98af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8kzgm" [5dd43b23-4a7c-4481-94b0-e0d5761d98af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.017461932s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-459609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gtkw6" [14a010a5-0617-4394-9dd1-5eefb1980431] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gtkw6" [14a010a5-0617-4394-9dd1-5eefb1980431] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.017643429s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-459609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-459609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
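
Editor's note: the DNS, Localhost and HairPin checks above reduce to three kubectl exec probes against the netcat deployment. A condensed sketch of the same probes for the auto context; the wrapper function is illustrative, the commands are the ones logged above.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a probe inside the netcat deployment and reports pass/fail.
func run(ctx, name string, cmd ...string) {
	args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, cmd...)
	err := exec.Command("kubectl", args...).Run()
	fmt.Printf("%-9s ok=%v\n", name, err == nil)
}

func main() {
	ctx := "auto-459609"
	// DNS: resolve the cluster service domain from inside the pod.
	run(ctx, "DNS", "nslookup", "kubernetes.default")
	// Localhost: the container can reach its own port 8080.
	run(ctx, "Localhost", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod can reach itself through its own service name.
	run(ctx, "HairPin", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}
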

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (68.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m8.071411172s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (120.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m0.137624162s)
--- PASS: TestNetworkPlugins/group/flannel/Start (120.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bmvx4" [42ab8518-298a-4ded-80a8-26e9d8acab9e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.037092851s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-459609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-459609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gnxbc" [eb344c0a-620e-4c52-b170-27bbbdc01f2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gnxbc" [eb344c0a-620e-4c52-b170-27bbbdc01f2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.060008664s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-459609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-459609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-459609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jdt9k" [c267567e-d8fd-406e-a48b-0eebbcea31d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jdt9k" [c267567e-d8fd-406e-a48b-0eebbcea31d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.011403682s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (112.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-459609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m52.525685918s)
--- PASS: TestNetworkPlugins/group/bridge/Start (112.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-459609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-459609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-459609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r8fnr" [dc9ae481-270d-4782-bec2-938a55c31689] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r8fnr" [dc9ae481-270d-4782-bec2-938a55c31689] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.027784546s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (26.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-459609 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-459609 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.212593744s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-459609 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-459609 exec deployment/netcat -- nslookup kubernetes.default: (10.848014906s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.98s)
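
Editor's note: the first nslookup above timed out and the test simply re-ran the probe until it resolved. A minimal retry loop over the same command; the attempt count and sleep interval here are illustrative, not the test's actual backoff.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same probe as the log, retried with a fixed, illustrative backoff.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "--context", "enable-default-cni-459609",
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
		if err == nil {
			fmt.Printf("resolved on attempt %d:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("DNS probe never succeeded")
}
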

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (148.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-448851 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-448851 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m28.376722675s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (148.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9vm9w" [8d719d3b-1f76-45f1-ad81-8b9d59fd77f7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.183233937s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (96.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-989559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-989559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (1m36.565212577s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (96.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-459609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-459609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5vlgl" [4b680581-a3c9-4453-9cee-d8df0c710541] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 19:45:51.525904   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5vlgl" [4b680581-a3c9-4453-9cee-d8df0c710541] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.020872492s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-459609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (108.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-209025 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-209025 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m48.459965364s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (108.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-459609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-459609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8l6pf" [0b44059b-2776-47e6-bded-0eefb98bb233] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8l6pf" [0b44059b-2776-47e6-bded-0eefb98bb233] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.010607682s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-459609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-459609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E1206 20:15:51.525089   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-380424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-380424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m4.390274815s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-989559 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [73861515-9ff9-459b-888d-b551bd3eac06] Pending
helpers_test.go:344: "busybox" [73861515-9ff9-459b-888d-b551bd3eac06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [73861515-9ff9-459b-888d-b551bd3eac06] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.02434932s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-989559 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.97s)
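
Editor's note: the DeployApp step above applies testdata/busybox.yaml, waits for the pod labelled integration-test=busybox, then reads the file-descriptor limit inside it. A rough equivalent driven directly through kubectl; the wait timeout mirrors the 8m0s budget in the log, and the helper function is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl is a small helper over exec for the no-preload context used above.
func kubectl(args ...string) ([]byte, error) {
	return exec.Command("kubectl",
		append([]string{"--context", "no-preload-989559"}, args...)...).CombinedOutput()
}

func main() {
	// Create the busybox pod from the same manifest the test uses.
	if out, err := kubectl("create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Printf("create failed: %v\n%s", err, out)
		return
	}
	// Block until the labelled pod is Ready (the log allows up to 8m0s).
	if out, err := kubectl("wait", "--for=condition=ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m0s"); err != nil {
		fmt.Printf("wait failed: %v\n%s", err, out)
		return
	}
	// Same in-pod check as the log: the open-file ulimit.
	out, _ := kubectl("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Printf("ulimit -n: %s", out)
}
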

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-989559 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-989559 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103128166s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-989559 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-448851 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [192902a7-080f-4a53-99b1-35b4885c1038] Pending
helpers_test.go:344: "busybox" [192902a7-080f-4a53-99b1-35b4885c1038] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1206 19:47:37.681089   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
helpers_test.go:344: "busybox" [192902a7-080f-4a53-99b1-35b4885c1038] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.037508504s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-448851 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-448851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-448851 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3829d1ee-202b-4b66-8fde-d596bd25ecc4] Pending
E1206 19:48:04.764415   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3829d1ee-202b-4b66-8fde-d596bd25ecc4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3829d1ee-202b-4b66-8fde-d596bd25ecc4] Running
E1206 19:48:08.166892   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:08.172170   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:08.182515   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:08.202840   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:08.243136   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:08.323533   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:08.483731   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:08.804408   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:48:09.445290   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.029634793s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 exec busybox -- /bin/sh -c "ulimit -n"
E1206 19:48:12.445852   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-209025 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dfbdbb8e-9924-45fe-b4a4-7dc35808aa68] Pending
helpers_test.go:344: "busybox" [dfbdbb8e-9924-45fe-b4a4-7dc35808aa68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1206 19:48:07.324640   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
helpers_test.go:344: "busybox" [dfbdbb8e-9924-45fe-b4a4-7dc35808aa68] Running
E1206 19:48:10.726144   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.025372104s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-209025 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-380424 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1206 19:48:13.286373   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-380424 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035233398s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-380424 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-209025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-209025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.143820298s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-209025 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (671.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-989559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-989559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (11m11.607285711s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-989559 -n no-preload-989559
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (671.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (701.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-448851 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-448851 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m40.768305332s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-448851 -n old-k8s-version-448851
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (701.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (604.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-380424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-380424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m4.349664948s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-380424 -n default-k8s-diff-port-380424
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (604.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (622.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-209025 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1206 19:50:51.525117   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:50:52.010222   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:50:52.321209   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:50:56.291486   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:51:02.561446   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:51:10.965920   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:51:23.041776   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:51:27.860985   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:27.866270   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:27.876546   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:27.896821   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:27.937143   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:28.017523   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:28.178079   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:28.499093   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:29.140275   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:30.420832   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:32.981458   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:38.101662   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:51:42.638621   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:51:48.342031   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:52:04.002542   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:52:08.823152   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:52:18.212347   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:52:32.886562   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:52:49.784292   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:52:54.631361   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:53:02.203725   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:53:08.167114   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:53:22.656969   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:53:25.923340   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:53:29.889278   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:53:35.851435   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:53:58.794284   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:54:11.704502   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:54:26.479381   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:54:34.367646   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:54:49.042873   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:55:02.053008   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:55:16.727626   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 19:55:42.080745   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:55:51.525956   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 19:56:09.763582   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 19:56:27.859741   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:56:55.545259   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 19:57:54.631989   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 19:58:02.203816   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 19:58:08.167512   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 19:58:22.656896   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:58:58.794236   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 19:59:34.367271   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 19:59:45.705530   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 19:59:49.042507   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 20:00:42.080614   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-209025 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m21.844096318s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209025 -n embed-certs-209025
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (622.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-347168 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1206 20:15:42.081550   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-347168 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (59.357630029s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-347168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-347168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.629761426s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (333.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-347168 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1206 20:18:45.266498   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.crt: no such file or directory
E1206 20:18:45.312748   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.crt: no such file or directory
E1206 20:18:56.184380   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.crt: no such file or directory
E1206 20:18:58.794498   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 20:19:26.272996   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.crt: no such file or directory
E1206 20:19:34.367387   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 20:19:49.041830   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 20:20:07.187190   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.crt: no such file or directory
E1206 20:20:18.105266   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.crt: no such file or directory
E1206 20:20:42.080835   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 20:20:48.194088   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.crt: no such file or directory
E1206 20:20:51.525373   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 20:20:57.682877   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 20:21:05.250581   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 20:21:11.212957   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 20:21:27.860651   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/bridge-459609/client.crt: no such file or directory
E1206 20:22:01.841351   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
E1206 20:22:23.343840   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.crt: no such file or directory
E1206 20:22:34.262265   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.crt: no such file or directory
E1206 20:22:37.415229   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/custom-flannel-459609/client.crt: no such file or directory
E1206 20:22:51.028244   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/no-preload-989559/client.crt: no such file or directory
E1206 20:22:52.088117   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/enable-default-cni-459609/client.crt: no such file or directory
E1206 20:22:54.632123   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/functional-317483/client.crt: no such file or directory
E1206 20:23:01.946183   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/old-k8s-version-448851/client.crt: no such file or directory
E1206 20:23:02.203977   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/kindnet-459609/client.crt: no such file or directory
E1206 20:23:04.350893   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.crt: no such file or directory
E1206 20:23:08.167331   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/auto-459609/client.crt: no such file or directory
E1206 20:23:22.657375   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/addons-463584/client.crt: no such file or directory
E1206 20:23:32.034345   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/default-k8s-diff-port-380424/client.crt: no such file or directory
E1206 20:23:45.124638   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/flannel-459609/client.crt: no such file or directory
E1206 20:23:54.575206   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/ingress-addon-legacy-283223/client.crt: no such file or directory
E1206 20:23:58.793854   70834 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-63652/.minikube/profiles/calico-459609/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-347168 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (5m32.963492007s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347168 -n newest-cni-347168
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (333.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-347168 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-347168 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-347168 -n newest-cni-347168
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-347168 -n newest-cni-347168: exit status 2 (252.111073ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-347168 -n newest-cni-347168
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-347168 -n newest-cni-347168: exit status 2 (250.407915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-347168 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-347168 -n newest-cni-347168
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-347168 -n newest-cni-347168
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

                                                
                                    

Test skip (39/305)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.1/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.1/binaries 0
21 TestDownloadOnly/v1.29.0-rc.1/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
148 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
226 TestChangeNoneUser 0
229 TestScheduledStopWindows 0
231 TestSkaffold 0
233 TestInsufficientStorage 0
237 TestMissingContainerUpgrade 0
242 TestNetworkPlugins/group/kubenet 3.4
251 TestNetworkPlugins/group/cilium 4.3
266 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-459609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-459609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-459609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459609"

                                                
                                                
----------------------- debugLogs end: kubenet-459609 [took: 3.235272749s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-459609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-459609
--- SKIP: TestNetworkPlugins/group/kubenet (3.40s)
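Note: every collector above reports "Profile not found" or context "kubenet-459609" does not exist because the profile was never started before debugLogs ran; the log itself points at the two commands to check and fix that (minikube profile list, minikube start -p kubenet-459609). A minimal Go sketch of that pre-check, assuming only the standard library and the out/minikube-linux-amd64 binary already used by this run:

// Sketch only: verify a minikube profile exists before collecting its debug logs.
// Assumes out/minikube-linux-amd64 (the binary exercised by this report) is present.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func profileExists(name string) bool {
	// "minikube profile list" is the command the log above recommends.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list").CombinedOutput()
	if err != nil {
		return false
	}
	return bytes.Contains(out, []byte(name))
}

func main() {
	if !profileExists("kubenet-459609") {
		fmt.Println("profile missing; start it with: out/minikube-linux-amd64 start -p kubenet-459609")
		return
	}
	fmt.Println("profile present; debug-log collectors can run against it")
}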

                                                
                                    
TestNetworkPlugins/group/cilium (4.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-459609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-459609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-459609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-459609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459609"

                                                
                                                
----------------------- debugLogs end: cilium-459609 [took: 4.14531855s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-459609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-459609
--- SKIP: TestNetworkPlugins/group/cilium (4.30s)
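The cilium debugLogs fail the same way: the kubectl config dumped above shows contexts: null, so every kubectl --context cilium-459609 call returns "context was not found". A hedged sketch (not the harness's own code) of gating the kubectl-based collectors on the context actually existing in the kubeconfig:

// Sketch only: skip kubectl-based collectors when the named context is absent.
// Assumes kubectl is on PATH; the context name is taken from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func contextExists(name string) bool {
	// "kubectl config get-contexts -o name" prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true
		}
	}
	return false
}

func main() {
	if !contextExists("cilium-459609") {
		fmt.Println("context cilium-459609 not in kubeconfig; skipping kubectl collectors")
		return
	}
	// e.g. run "kubectl --context cilium-459609 get nodes" and the other collectors here.
}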

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-730405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-730405
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
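The skip at start_stop_delete_test.go:103 is driver-gated: this group only runs under the virtualbox driver, which is not the driver exercised by this job. As an illustrative sketch only (driverName() is a hypothetical stand-in for however the harness resolves the active driver, not minikube's actual helper), a skip of that shape with Go's testing package:

// Illustrative only: a driver-gated skip, in the spirit of the message logged above.
package harness

import "testing"

// driverName is a hypothetical stand-in; hard-coded here so the sketch compiles.
func driverName() string { return "kvm2" }

func TestDisableDriverMounts(t *testing.T) {
	if driverName() != "virtualbox" {
		t.Skip("skipping - only runs on virtualbox")
	}
	// The real test would exercise the disable-driver-mounts start behaviour here.
}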

                                                
                                    